Fashionable Nonsense

If enough of us band together and decide we don’t have to wear pants any more, we won’t have to do it … Pants are designed to hold you back from achieving all your hopes and dreams.
—Matt Bellassai. “Reasons Wearing Pants Is The Absolute Worst” BuzzFeed Video

How did an organ-squeezing belly tourniquet become part of our everyday wardrobe—and what other suboptimal solutions do we routinely put up with?
—Stephen Dubner. “How Did The Belt Win?” Freakonomics Radio

The older and less successful I get, the more I think that the twentieth century never happened. In far-off Abu Dhabi, where the European Tour is playing this week—one can only imagine the security—the big “news” is whether science is important and how high hemlines can go. The leader after the first day was the 22-year-old winner of last year’s U.S. Amateur, Bryson DeChambeau, best known for his odd clubs, which are each the same length, and his odd opinions—which are, it seems, equally longsighted. “I’m a golfing scientist,” DeChambeau told the world’s golf press there after shooting an opening 64, before going on to compare himself implicitly to both George Washington and Einstein. Also in Abu Dhabi, the Euro Tour announced it would break with tradition and allow its members to play pro-am rounds in shorts. The reactions were predictable: “Pants on a golfer, for whatever reason, add a certain gravitas,” grumbled the architect/blogger Geoff Shackelford, while the controversial Ian Poulter countered with “I mean, it’s 2016.” It’s unknown whether Poulter frequents Negro jazz cafes or can do the Charleston, although he certainly knows his way around a bob. The coming of DeChambeau, however, may foretell more than simply the return of a combination of economic oppression and wild excess not seen—outside the pages of The Great Gatsby—in nearly a century.

Hence, I don’t really care about the hemline question, as amusing as it may be to witness grown men discuss fashion with all the seriousness of the Dowager Countess. But DeChambeau’s story is, I think, riveting, and not just because he won the NCAA championship in addition to the U.S. Amateur last year, making him only the fifth golfer in history to win both in the same year. (The others are known hacks Jack Nicklaus, Phil Mickelson, Tiger Woods, and Ryan Moore.) Yet despite that kind of talent he still hasn’t turned professional, although he will likely have to soon: his school, Southern Methodist University, has been banned from next year’s NCAA championships for “recruiting and unethical conduct in the men’s golf program,” according to the NCAA. DeChambeau, however, is prepared for life after college golf: unusually for a student athlete with his kind of game, his major isn’t “physical education” or the old student-athlete standby, “business.” Instead, it’s physics.

Hence, the headline at Golf.com after the first round—in what might be one of the greatest feats of headline writing, golf division, in at least a couple of weeks—was “Abu Dhabi Leader DeChambeau Compares Himself To Einstein.” After his round, the young man told the media that when he first used what Golf calls “his unique swing” in 2011, he thought to himself “This could change golf,” and then said people “like Einstein and George Washington”—surely the first time both of those names have been mentioned in the same sentence by a golfer at a press conference—“just … capitalized on their differences and showed the world a little different side.” Not the most articulate account of the two historical figures, to be sure—it recalls Rodney Dangerfield’s immortal line from Caddyshack: “This was invented by my friend Albert Einstein. Great guy. Made a fortune in physics.”—but it has the great advantage of being both hilariously ridiculous and, well, new.

Which is odd, because in another sense what DeChambeau represents is something absurdly old: as Ryan Lavner put it in his Golf Channel report on DeChambeau last summer, the coverage of DeChambeau after his Amateur win “portrayed [him] as an obsessive-compulsive, numbers-crazed techie who dissects a golf course.” For the past sixty years or so, at least, such a type of person has been considered anachronistic: University of Chicago Ph.D. Thomas Frank, for example, wrote a book twenty years ago entitled The Conquest of Cool about how (as one reviewer—Dan Geddes—summarized the point), “throughout the Fifties a general revulsion against the stultifying demands of consumer culture grew,” and industries like advertising and—interestingly from the point of view of golf’s present hemline debate—men’s fashion changed with that revulsion. Advertising, for example, experienced what’s become known as the “Creative Revolution”: “abandoning the scientific advertising of the 1950s,” ad agencies began “trying to portray their products as engines of youth and rebellion.” In this way, business constructed the “hip versus square” battle that, for the most part, drives all of our lives today.

Bryson DeChambeau, however, doesn’t care about that: his “Bible” is, as Lavner notes, a book entitled The Golfing Machine, a strange book written by an engineer named Homer Kelley and first published to obscurity in 1982—eleven years before DeChambeau was born. Like the Bible, The Golfing Machine is a book that has begun to attract a body of scholarship: not long ago, a senior writer for Golf magazine named Scott Gummer wrote a book entitled Homer Kelley’s Golfing Machine: The Curious Quest That Solved Golf. (From a professional point of view, Gummer’s tome is a kind of combined production and reception history.) The Golfing Machine is, apparently, a book that appeals to the logical sort of mind: in the foreword—another piece of critical apparatus—that he wrote for Gummer’s book about Kelley’s book, PGA Tour player Steve Elkington, winner of the 1995 PGA Championship, said that Kelley’s work appealed to something deep within his brain: “I always sensed,” Elkington wrote, “that the explanation for how to create a mechanically sound stroke could be found in math and science.” The Golfing Machine is a book known for being deeply “scientific”—to the point where it may even be difficult to understand.

Yet despite the book’s known difficulty—according to Rick Lipsey at Sports Illustrated, the book “reads like a physics textbook, which in a sense it is”—Lavner tells us that “DeChambeau has always wanted to model his game after” Kelley’s since his teacher, Mike Schy, gave it to him when he was 15 years old. One of his opponents during the match play portion of the U.S. Amateur, Maverick McNealy—himself a student of “industrial engineering that is modified to add computer science and statistics, with a specific concentration in financial and decision engineering” at Stanford, and the son of the co-founder of Sun Microsystems—called DeChambeau “very analytical and calculating.” But according to Lavner, the “aspect of his game that doesn’t receive as much attention—or credit—is DeChambeau’s imagination.” To Lavner, it seems, imagination is somehow different from, and opposed to, the analytical.

Presumably, Lavner is depending for his thought on arguments proposed by students of the humanities—like, say, English professor Stanley Fish. Fish might find my description of The Golfing Machine as DeChambeau’s “Bible” incredibly revealing: he is, after all, well known for his clashes with people like biologist Richard Dawkins. Just a few years ago in the New York Times, for example, Fish picked up on a phrase Dawkins dropped in a television interview about how, while religious adherents can only fall back on biblical authority, “in the arena of science … ‘you can actually cite chapter and verse’”—that is, anyone can consult the relevant studies. It’s the sort of argument that the “analytical” DeChambeau might be thought to find extremely appealing—in science, the claim goes, anyone could pick up the relevant thread and see for herself.

Fish, however, will have none of it: according to him, all this science talk means is that we “still cite chapter and verse—we still operate on trust—but the scripture has changed … and is now identified with the most up-to-date research conducted by credentialed and secular investigators.” The arguments of scientific people that, as Fish has it, the “chapter and verse of scriptural citation”—that is, the actual Bible—“is based on nothing but subjective faith,” whereas “the chapter and verse of scientific citation is based on facts and evidence,” can be met, according to Fish, with the (by now tiresomely familiar to some, still incredible to others) argument, made by Fish elsewhere, that “the act of observing can itself only take place within hypotheses (about the way the world is) that cannot be observation’s objects because it is within them that observation and reasoning occur.” Hence, to Fish, “the rhetoric of disinterested inquiry, as retailed by the likes of Dawkins … is in fact a very interested assertion of the superiority of one set of beliefs.” Science, in other words, is simply another species of rhetoric, just one tribe’s way of talking among others, and hence there is necessarily a distinction between science and other forms of inquiry, which is also to say that the analytical must be distinguished from the imaginative.

In sum, Fish is arguing that, as he says, “despite invocations of fairness and equality and giving every voice a chance,” in reality people who advocate for scientific beliefs “divide the world into ‘us’ and ‘them.’” They are just another tribe in a world that, Fish suggests, is ruled by tribal identities above everything. In this case, the “tribes” seem to be those who believe in “data” and “evidence” and “logic” and so forth—the quotes are there because it is precisely the status of these items that is in dispute—and those who believe in “imagination” and “creativity” and so on. As if, one supposes, science were conducted by rote, and whatever is done in the humanities had nothing whatever to do with facts or evidence.

This distinction, however, is not one that’s always been believed: take, for example, the obituary published by the famous old American magazine the New Republic for James Joyce, the noted Irish novelist—and another author, like Homer Kelley, known for the density, and near-unreadability, of his work. As a study of Joyce’s books in Significance, the website of the Royal Statistical Society, has remarked, Joyce’s two chief works—Ulysses and Finnegans Wake—“have often been described as difficult, and particularly the latter as unreadable or worse.” (Though Joyce also wrote what the New York Times has called “just about the finest short story in the English language”—“The Dead,” the story at the end of his collection Dubliners.) Like Kelley, though to a much greater extent of course, Joyce has attracted a large critical following: the books about Joyce’s books must number in the thousands.

The interest Joyce’s work has attracted is often ascribed to the particular ferocity with which Joyce defended a notion of the artist as “the priest of the imagination,” as he called it. In Joyce’s conception, the artist is the creator of “epiphanies,” by which he meant the “outward and visible sign of an inner and spiritual grace”; the chief of the tools Joyce is known to have used to create these epiphanies is the technique known as “stream-of-consciousness”—a technique that Joyce himself traced to an 1888 French novel entitled Les Lauriers sont coupés, but that most scholars believe long predated that work. (Though they also for the most part agree that, as one critic has put it, “it had not been employed previously in English on the scale, or with the flexibility” with which Joyce used it.) In using the stream-of-consciousness technique, Joyce drew upon the work of the philosopher Henri Bergson, whose work “emphasized the difference between scientific, clock time and the direct, subjective, human experience of time,” as one commentator has put it. James Joyce might then be taken as the champion of Art as opposed to Science—and, in that sense, perhaps the precursor to the movement against the scientific that, as described by Thomas Frank, reached the larger culture sometime during the Fifties.

Yet, the New Republic did not describe Joyce as a great opponent of science on the occasion of his death in 1941. On the contrary: the Irish author was, the magazine said, “the great research scientist of letters, handling words with the same freedom and originality [with which] Einstein handles mathematical symbols.” And the magazine by no means qualified the statement—if anything, it doubled down: “Literature as a pure art,” the article also claimed, “approaches the nature of pure science.” To many readers today, the comparison appears outlandish—as Fish’s work shows, many, many people believe today that Art and Science are different, that they possess, as the biologist Stephen Jay Gould once said about science and religion, “non-overlapping magisteria,” or differing realms of exploration and authority. But that is not what the editors of the New Republic thought.

Now, in one sense this is a perfect instance of exactly the point Stanley Fish has spent by far the majority of his adult life making: what he has called (in “Dorothy and the Tree: A Lesson in Epistemology,” from 2011) “the thesis that the things we see and the categories we place them in … are functions of ways of thinking that have their source in culture rather than nature.” To many of us today, the 1941-era version of The New Republic looks ridiculous: to those educated in the forty years between, say, 1960 and, perhaps, 2000 the idea that James Joyce and Albert Einstein were in the same line of work is, at the very least, risible and, at worst, a sign of incipient schizophrenia.

Yet as DeChambeau’s mention of George Washington and Albert Einstein in the same breath reveals, Fish’s argument is itself revisable: that is, if our categories are based upon “culture,” and not something eternal about the universe, then our categories can be endlessly redrawn—both Einstein and Washington, one could say, were revolutionaries in their respective fields, or both spoke Indo-European languages, or both were white men, or both grew up influenced by a Judeo-Christian morality, or both believed that long white hair makes a man look distinguished. But if that is so, then it is also possible, as indeed it must be by the terms of Fish’s argument, to think that, since the distinction between the categories is “cultural” and not intrinsic, science and art are simply differing races of the same species. That is, if you really take Fish’s argument seriously, then it shouldn’t be possible—as apparently it is for many people today—to think of “culture” as superior to “nature.” Instead, you’d think more or less what, apparently, the long-ago editors of the New Republic thought: that the distinction between science and art is a trivial one.

There are, as it happens, such people today, though it takes some doing to find them. Or should I say, hunt them down. “To interpret tracks and signs,” argues Louis Liebenberg in his book, The Art of Tracking: The Origin of Science, “trackers must project themselves into the position of the animal in order to create a hypothetical explanation of what the animal was doing.” In this way, Liebenberg says, tracking is “not strictly empirical, since it also involves the tracker’s imagination”—which is to say that Liebenberg is perfectly willing to grant Stanley Fish’s point that science is, as Liebenberg says, “not only a product of objective observation of the world through sense perception” but “also a product of the human imagination.” But if that is so, and imagination is itself a natural, evolved capacity, then far from being a “cultural” phenomenon, science would be—just as it claims—ruled by natural, and not cultural, considerations.

Or to put it another way, the distinction would be beside the point: “Interpretation”—according to Richard Posner, a judge on the U.S. Court of Appeals for the Seventh Circuit—“is a natural human activity; it doesn’t require instruction.” Presumably, what Posner means is that, if interpretation exists, it must have come into existence somehow—and that, since it clearly has, it must have developed in a fashion not appreciably different from the biologically based story Liebenberg lays out. To Fish, of course, this would mean something like the “capture” of the humanities by the sciences—but from the point of view of Liebenberg or Posner or the long-dead editors of the New Republic, there isn’t anything for the “sciences” to “capture”: both the sciences and the humanities are simply different facets of the same process.

What DeChambeau’s unself-consciousness about his desire for data might mean, then, is that the world described by Frank—the world in which the sciences and the humanities are antagonistic to each other—may be about to turn. Nate Silver’s FiveThirtyEight, for example, has pioneered data-driven political journalism, while—as the newscasters say—in sports the Boston Red Sox won a championship for the first time in decades on the strength of statistical work influenced by the baseball “sabermetrician” Bill James. If it is true that, as some might like to say, the zeitgeist is changing, then that would likely have certain implications that, at the moment, are entirely unpredictable—at least, for those of us old enough to remember the twentieth century. But if those implications do arrive, it will not matter much to me: as I say, the older I get the more I am convinced that the twentieth century—that century that began by doing such fantastic deeds as passing constitutional amendments like the one granting women the vote and ensuring the direct election of senators, and then slowly lost momentum throughout the latter half of its course—never happened.

Of course, we will get to wear shorts.

 


Smooth Criminal

[Kepler’s] greatest service to science was in impressing on men’s minds that … if they wished to improve astronomy … they were not to content themselves with inquiring whether one system of epicycles was better than another but that they were to sit down to the figures and find out what the … truth, was. He accomplished this by his incomparable energy and courage, blundering along in the most inconceivable way (to us), from one irrational hypothesis to another, until, after trying twenty-two of these, he fell, by the mere exhaustion of his invention, upon the orbit which a mind well furnished with the weapons of modern logic would have tried almost at the outset.
—C.S. Peirce. “The Fixation of Belief” Popular Science Monthly, Nov. 1877.

 

If MTV’s “Video Music Awards” are remembered in the future for any reason (and at this point, despite what I’m going to write, that’s pretty unlikely), it will be for one sentence: “Yo, Taylor, I’m really happy for you and I’mma let you finish, but Beyoncé had one of the best videos of all time.” The sentence was spoken at the 2009 edition of the awards show by the rap artist Kanye West, who objected to Taylor Swift’s win over Beyoncé in the (badly named) category of “Best Female Video.” Afterwards West was widely derided: even President Barack Obama, then in his first year in office, called him a “jackass.” But then, there was a backlash to the backlash. To Ann Powers of the Los Angeles Times, for example, the incident suggested hidden racism: “Maybe,” Powers wrote, West “was miffed that this young black pop queen’s heels were being nipped at by a blond Ivory Girl whose fans tend to look quite a bit like her.” What interests me about this affair is not, however, Ms. Powers’ judgment that what motivated West’s critics was racism (which I think was largely correct), but instead what it tells us, not about Obama’s presidency, but about why Hillary Clinton’s candidacy will fail.

Instead, what’s of interest about the affair now, at the end of Obama’s term, is how limited that judgment was: the actual crime West committed, I submit, wasn’t that he reminded his audience of the structural racism that prevents artists like Beyoncé from engaging in a fair contest with artists (?) like Swift, but simply that he reminded people of the idea of fair contests: that somebody has to win and somebody has to lose. That crime, in turn, not only suggests why Barack Obama’s presidency will be judged by history as a failure, but also why Hillary Clinton is having such difficulty securing the Democratic nomination: because Obama had, at least at the beginning of his presidency, a spectacularly wrong theory of both wisdom and democracy—and insofar as Clinton is attempting to follow his path, so does she. The fundamental problem with both “centrist” Democrats is that Obama did not, and Clinton does not, acknowledge that wisdom is lumpy—as the greatest of their predecessors did.

Clinton is having difficulty, in other words, because her campaign is—as it must be—premised on a continuity with the Obama administration, and Obama’s administration was premised on a bad idea. And although Obama did express that bad idea in reaction to Kanye West’s clowning at the VMAs, it’s more readily visible during a seemingly far more serious event: namely, the widespread Republican victories during the 2010 midterm elections. In response to that event Obama said, in a statement meant to be conciliatory, that “no person, no party, has a monopoly on wisdom.” It’s an apparently innocuous statement, a sentiment with which most people would likely agree. Compromise, we are often pietistically informed, is what it means to inhabit a democratic government.

To say so, however, is fundamentally to misunderstand not only wisdom but also democratic government—as the predecessor whom Obama, in his final State of the Union speech, acknowledged as the better president well knew. “It is common sense,” Franklin Delano Roosevelt said in a 1932 campaign speech, “to take a method and try it: If it fails, admit it frankly and try another.” The main thing, FDR said, was to keep going: “above all,” the great president said, “try something.” In contrast to Obama’s theory of wisdom—in which “everybody’s got some”—Franklin Delano Roosevelt’s theory of wisdom recognizes that, in the real world, whatever is thought to be “wise” is always, and always must be, relative—relative, that is, to some other policy or person. What that means is that whatever policy or person is chosen has, by definition, a “monopoly on wisdom.” Wisdom is lumpy because it is always engaged in a contest with other policies or people.

The point can be further explicated by an anecdote related by Senator Elizabeth Warren in her book, A Fighting Chance: “He teed it up this way: I had a choice. I could be an insider, or I could be an outsider,” Warren writes there about a 2009 meeting she had (some months before the VMAs) with Lawrence Summers, Obama’s economic adviser, concerning the new consumer advocacy bureau she was instrumental in constructing. “Insiders,” Summers told Warren, “get lots of access and a chance to push their ideas,” while “people on the inside don’t listen” to outsiders. The difference, Summers further informed Warren, is simple—“insiders also understand one unbreakable rule: They don’t criticize other insiders.” It’s a rule that sounds modest enough, even kindly; it sounds somewhat like the idea that motivates democracy: just because someone disagrees with you is no reason, as previous theories of government had it, to put their head on a pike and disinherit their children. But notice that Summers’ theory is also Obama’s theory of wisdom, and also what the president thought was wrong with what Kanye West did: c’mon, Summers is saying, all you have to do is quiet down a bit …

President Obama’s theory of wisdom, then, is wrong on both counts. “Wisdom” is not a quality that everyone shares equally; instead, it is defined by being a monopoly. In fact, to say that “no one has a monopoly on wisdom” is essentially to deny not only the existence of wisdom but also the very possibility of representative, democratic government. As a statement about the world, it is just not true that “everyone has some wisdom”: it is simply not so that on any given issue I have, say, thirty percent of the available wisdom and you have seventy percent. On the contrary: on nearly all questions it will be so that you have all of the wisdom and I have none, or that I have all the wisdom and you have none. Obama’s statement, in short, imagines that “wisdom” is “smooth”; i.e., that it is spread around. But in reality—as FDR’s statement says—“wisdom” is lumpy: in some, and likely most, cases, somebody is going to know the most about the relevant subject. True democracy is not about “sharing wisdom,” or some other sentimental (and ultimately corrupt) nonsense, but a machine for getting the relevant authority on the relevant subject in charge—and then getting the hell out of the way so that person or party can go to work.

In reality, then, democratic government is entirely based on the proposition that one person or group does have a monopoly on wisdom, at least during the transitory moment of their term in office. (That is why one of the surest signs of despotism is that the leader is elected for life.) Restating the point isn’t to suggest that we return to the medieval method of executing the losers—a method, it bears reminding, still in use in many places today—but to point out that this proposition is, after all, the entire point of having elections. As the (notoriously racist) Kentucky basketball coach Adolph Rupp said in 1965, “that’s why we play the game.” Obama won his election in 2008—nobody voted for him in order that he should bow down to the people who lost.

That in turn gets us back to Kanye. (Remember Kanye? There’s an essay about Kanye.) As I mentioned, Kanye’s real crime at the VMAs wasn’t that he reminded people of the likely biases that caused Swift to win over Beyoncé. (Indeed, Beyoncé ultimately won the award for best video of the year—of either gender.) Kanye’s real crime was that he offended against “decency” or “civility,” of the kind that Larry Summers, and at the end of the day Obama, represent. Kanye reminded people that there have to be winners and losers: or to put it another way, he committed the ultimate sin of a “celebrity” these days, the sin that Hillary Clinton is (wrongly) so desperate to avoid.

He reminded us of reality.

Art Will Not Save You—And Neither Will Stanley

 

But I was lucky, and that, I believe, made all the difference.
—Stanley Fish. “My Life Report” 31 October 2011, New York Times. 

 

Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the podcast that tells “One story. Week by week.” as the advertising tagline has it. Serial is devoting a season to Bergdahl because of what he chose to do on the night of 30 June 2009: as the show reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit he faces a life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hopes some people place in the humanities are not only illogical, but arguably holding up actual political progress.

Bergdahl can be said to have crossed that line because what happens to him when he is tried by a military court will likely turn on the intent behind his act: in legal terms, this is known as mens rea, which is Latin for “guilty mind.” Intent is one of the necessary components prosecutors must prove to convict Bergdahl for desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that demonstrates the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or more specifically an agent who intended to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, might be this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is because, to a scientist, the overall numbers matter. The reason why they matter demonstrates just what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason why the overall numbers of the flock would matter to a scientist is because sample size matters: a behavior that one bird in a flock of twelve birds exhibited is probably not as significant as a behavior that one bird in a flock of millions exhibited. As Nassim Taleb put it in his book, Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
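To make Taleb’s arithmetic concrete, here is a minimal sketch in Python of how the chance of seeing at least one lucky standout grows with the size of the group; the per-individual probability used below is an arbitrary, illustrative number, not anything Taleb supplies.

# A sketch of Taleb's point: the chance that *some* member of a group
# produces a rare result purely by luck depends on how big the group is.
# The per-individual probability p is an assumed, illustrative figure.

def prob_at_least_one(p, n):
    """Probability that at least one of n independent trials succeeds."""
    return 1 - (1 - p) ** n

p = 1e-6  # assumed odds that any single monkey (or businessman) gets lucky
for n in (5, 1_000, 1_000_000, 10_000_000):
    print(f"group of {n:>10,}: chance of at least one standout = {prob_at_least_one(p, n):.4f}")

With five monkeys a verbatim Iliad would indeed be astonishing; with ten million, at least one standout is almost exactly what chance predicts.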

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view that question as a relevant one in determining whether he is guilty of the crime of desertion. That is because, as a discipline concerned with interpretation, the law will have ruled such a question out of court, as we say, before the court has even met. To consider how many birds were in the flock when one of them behaved strangely is to have ceased, a priori, to consider that bird as an agent: when one asks how many other birds there are, the implication is that what matters is simply the role of chance rather than any intent on the part of the bird. Any lawyer who brought up the fact that Bergdahl was the only one out of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering, roughly 65,000 American soldiers were in Afghanistan by early October of 2009, as part of the “surge” ordered by President Barack Obama shortly after taking office. The number, according to a contemporary story by The Washington Post, would be “more than double the number there when Bush left office,” which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive buildup of troops. The sample size, in Taleb’s terms, was growing rapidly at that time—with what effect, if any, on Bergdahl’s situation I await enlightenment.

Whether that matters in terms of Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted claims, by many in the profession of literary study, for the application of such work to actual political change is thoroughly justified: “when you get to the end” of the road many of those within the humanities have been traveling at least since the 1960s or 70s, Fish has remarked, for instance, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It’s a warning whose relevance may even now be reaching its peak, as the nation realizes that, after all, the great political story of our time has not been the minor-league struggles within academia, but rather the story of how a small number of monkeys have managed to seize huge proportions of the planet’s total wealth: as Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by PolitiFact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.

Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
Traditional; rec. by Charles Davis Tilman, 1889
Lexington, South Carolina

… science is but one.
Lucius Annaeus Seneca.

Rule changes for golf usually come into effect on the first of the year; this year, the big news is the ban on “anchored” putting: the practice of holding one end of the putter in place against the player’s body. Yet as has been the case for nearly two decades, the real news from the game’s rule-makers this January is about a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as for instance Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued that something needed to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis since has demonstrated, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical since so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe a regulatory body could have any advantage over a “market-maker” like King. Further, once their religious training has ended, it’s unlikely that many remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments would begin with something important—like, say, a prohibition against murder?—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps the most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the basis that, as the title of his essay says, each subject possesses a “non-overlapping magisteria”: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say then that the First Commandment could be thought of as a rendering of a logical rule seen as if through a glass darkly would be impermissible according to the prohibition laid down by Gould (among others): the prohibition against importing science into religion or vice versa. And yet some argue that such a prohibition is nonsense: for instance Richard Dawkins, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, limiting itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or, more specifically, protects it.

Supposing Dawkins, and not Gould, to be correct then is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment could be thought of as a reflection of what’s known as the “law of non-contradiction,” usually called the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936 F. Scott Fitzgerald claimed, in Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric used by the First Commandment, or the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty in other words would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, it’s necessary to create more different kinds of golf. But the work of Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of [a] product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, what happens when the supply of a commodity is limited and not (potentially) infinite.

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth, but even aside from the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of the classical idea of the “law of demand.” Under that idea, as money—or any commodity one likes—becomes rarer, it drives people to obtain more of it. But, Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity will stop bidding the price up, and instead look for substitutes for that commodity. Thus, the overall “demand” must necessarily decline.
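As a rough illustration of the arithmetic behind that claim, here is a small Python sketch; the spending rates are assumptions chosen for illustration, not figures from Stiglitz, and only the $21.7 million total comes from the passage above.

# Sketch of the aggregate-demand point: the same income generates more spending
# when it is spread across many households than when it sits with one.
# The spending rates below are assumed, illustrative numbers.

TOTAL_INCOME = 21_700_000      # the $21.7 million income cited above
RICH_SPEND_RATE = 0.10         # assumption: a very high earner spends ~10% of income
TYPICAL_SPEND_RATE = 0.90      # assumption: an ordinary household spends ~90% of its share

concentrated = TOTAL_INCOME * RICH_SPEND_RATE
dispersed = 500 * (TOTAL_INCOME / 500) * TYPICAL_SPEND_RATE

print(f"demand when one household holds it all: ${concentrated:,.0f}")  # $2,170,000
print(f"demand when 500 households share it:    ${dispersed:,.0f}")     # $19,530,000

The particular rates do not matter; the direction does: the more the total concentrates, the smaller the share of it that re-enters the economy as demand.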

That, for instance, is what happened to cotton after the year 1860. That year, cotton grown in the southern United States was America’s leading export, and constituted (as Eugen R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain” that year. But as the war advanced—and the Northern blockade took effect—that percentage plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, the share of other sources of supply rose: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports prior to the bombardment of Fort Sumter in 1861 amounted to merely $7 million—but by the end of the war in 1865, Egyptian profits were $77 million, as Europeans sought different sources of supply than the blockaded South. This, despite the fact that it was widely acknowledged that Egyptian cotton was inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South thus failed to understand that, while it did constitute the lion’s share of production prior to the war, it was not the sole place cotton could be grown—other models for production existed. In some cases, however—through natural or human-created means—an underlying commodity can have a bottleneck of some kind, creating a shortage. According to classical economic theory, in such a case demand for the commodity will grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether it be an inferior substitute or, perhaps, giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to have such a stranglehold on the market that it effectively kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation has only two possible solutions: either the monopoly is broken, or people turn to a completely different substitute. To use an idiom from baseball, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s yearly survey of participation found 30 million players; in 2014, by contrast, the number was slightly less than 25 million, according to a Golf Digest story by Mike Stachura. Mark King’s plan to win those numbers back, as we’ve seen, is to invent a new set of rules—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you could take up another set of rules.

Yet an examination of golf as it is, I’d say, would find that Rorty’s ideal already describes, more or less, the sport’s current model in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Despite the fact that, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” it’s also true that private clubs constitute around 30 percent of all golf facilities, and as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means to play the game. To put it in Stiglitz’ terms, what this means is that the PGA of America—and the private clubs who hire PGA professionals to staff their operations—essentially constitute a monopoly on instruction, or in other words the basic education in how to accomplish the essential skill of the game: hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers play the game to enjoy. Unfortunately, it’s one that most golfers never achieve: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to re-achieve the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that golf’s participation rate has declined not because of an unfair set of rules, but because golf’s model has more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant has so much power that it effectively destroys the entire market. In situations like that, Stiglitz (and many other economists) argue that regulatory intervention is necessary—a realization that, perhaps, the United States Golf Association is arriving at also through its continuing decision not to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect by continuing to allow some to monopolize access to the pleasure of the sport. By refusing to allow an “escape hatch” by which the older model could cling to life, the USGA is, consciously or not, speeding the day when golf will become “all one thing or all the other,” as someone once said upon a vaguely similar occasion, invoking a similar sort of idea to the First Commandment or the law of non-contradiction. What the stand of the USGA in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” may be no different than, say, claiming your subprime mortgages are good, or that the figures of the police accurately reflect crime. For the USGA then, if no one else, that old time religion is good enough: despite being against anchoring, it seems that the golf organization still believes in anchors.