Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
Julius Caesar. Act I, Scene 2.

 

One of Ryan's steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his "lawyer-caddie." Ostensibly, it's meant as a kind of joke, although it's not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments, Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are "jobs … hard to come by in recent years" for would-be lawyers, but the jobs that do exist come in two flavors—either "something that pays in the modest five figures" (which implies that Ryan might never get out of debt), "or something that pays much better" (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called "Talent Grab" by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, "salaries paid to high-level professionals—'talent'—started to rise." There are at least two possible explanations for that rise. Gladwell's is that "members of the professional class" have learned "from members of the working class"—that, in other words, "Talent" has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus "leftists": to them, widening inequality might be explained by the same reason that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis' forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called "Mau Mau Uprising"—also known as the "Kenya Emergency"—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is the only reason we know about it in the detail we now do—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, many Kenyans, like the people of a lot of other European colonies, demanded independence, and Britain, like a lot of other European powers, would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet the American experience largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally organized troops against each other. What makes the Kenyan episode relevant is that it was fought instead using the doctrines of counterinsurgency: that is, the "best practices" for ending an armed independence movement. In Kenya, this meant "slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes," as Mark Curtis reported in 2003's Web of Deceit: Britain's Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called "enclosed villages." Those gathered were then "questioned" (i.e., tortured) in order to find those directly involved in the independence movement. It's a catalogue of horror, but what's more horrifying is that the methods being used in Kenya were also being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the "Kenya Emergency," the British Empire was also fighting what's called the "Malayan Emergency."

In Malaya, from 1948 to 1960 the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency warfare that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also fighting counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it's important to remember that the very term "concentration camp" was first widely used in English during the Second Boer War of 1899-1902. "Best practice" in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It's perhaps no coincidence, in other words, that for millennia elite education in the Western world began with Julius Caesar's Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: "Gaul is divided into three parts …"), the Gallic Wars could also be described as a kind of "how to" manual for "pacification" campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, "urged them to take up arms in order to win liberty for all." In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that "if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them." The book also describes torture in several places (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, common stuff in elite European education—the British Army did not suddenly invent these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of "counterinsurgency," why didn't the British Army use them during the "American Emergency" of the 1770s?

According to Pando Daily columnist "Gary Brecher" (a pseudonym for John Dolan), perhaps the "British took it very, very easy on us" during the Revolution because Americans "were white, English-speaking Protestants like them." In fact, that leniency may have been the reason the British lost the war—at least according to a U.S. Army War College paper by Lieutenant Colonel Paul Montanus (U.S.M.C.), "A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781." To Montanus, the British Army "needed to execute a textbook pacification program"—instead, the actions that army took "actually inflamed the [populace] and pushed them toward the rebel cause." Montanus, in other words, essentially asks the question: why didn't the Royal Navy sail up the Potomac and grab Martha Washington? Brecher's point is pretty valid: there simply aren't many reasons to explain why Lord Cornwallis or the other British commanders didn't do that, other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet it might be pointed out that just what the British officers saw is still an open question: did they see "cultural Englishmen"—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence simply as a result of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the professions of American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that "if we can get rid of the system as we now know it, then Bobby Bonds' son, if he makes it to the majors, will make more in one year than Bobby will in his whole career." (Even then, when Barry Bonds was around ten years old, people knew he was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds' earning power: Bonds "ended up making more in one year than all the members of his father's San Francisco Giants team made in their entire careers, combined" (emp. added). Barry Bonds' success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-85 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune's Paul Sullivan. And so on: it doesn't take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could return a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book's advance. Safire's attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow's attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the "acceptability clause" of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—"was being fraudulently exercised," because the real reason Morrow wanted to reject Safire's book wasn't the one Morrow gave (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow's calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have since become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than, as before, effectively as servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the "Talent" (Gladwell's word) eventually triumphs over "Capital."

As I mentioned, for a variety of reasons—in the first place, the justification for the study of "culture," which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, "the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests"—a lot of academic leftists would best explain that triumph by the fact that, say, George Lucas and the head of Twentieth-Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that "Talent" has been rewarded by "Capital" because of a shared "culture" between the two (apparent) antagonists—just as Britain treated its American subjects differently than its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans, and the Boers …). (Which was either "culture"—or money.) But there's a problem with this analysis: it doesn't particularly explain Ryan's situation. After all, if this hypothesis were correct, it would appear to imply that—since Ryan shares a great deal "culturally" with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a "cultural" explanation for why some people in a variety of professions have become compensated far beyond even their fellows within the same profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to use a tool first widely used by organized labor: the strike.

It's an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller's personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players' union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the "talents" Gladwell writes about made use of either a kind of one-person strike, or the threat of one, to get their way: Lauren Hutton, for example, "decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract"; in 1975 "Hollywood agent Tom Pollock" demanded "that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars"; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire's book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor's) was, for instance, telling the publishers that he would withhold his client's labor product until his demands were met, just as the United Auto Workers shut down General Motors' Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it's arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie, Rebel Without a Cause, there's a scene in which James Dean's character gets involved in what the kids in his town call a "chickie run"—what some Americans know as the game of "Chicken." In the variant played in the movie, two players each drive a car towards the edge of a cliff—the "winner" of the game is the one who exits his car closest to the edge, thus demonstrating his "courage." (The other player is, hence, the "chicken," or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without A Cause thus describes what happened when Moscow and Washington, D.C. faced each other in October 1962, the incident later called the Cuban Missile Crisis. ("We're eyeball to eyeball," then-U.S. Secretary of State Dean Rusk said later about those events, "and I think the other fellow just blinked.") The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to Star Wars sequels to Lucas, or Revlon did when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played "Chicken"—and won.

To those committed to a "cultural" explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared "culture" simply begs the question: after all, there have been plenty of people, and unions, that have played games of "Chicken"—and lost. So by itself the game of "Chicken," it might be argued, explains nothing about what led employers to give way. Yet the "cultural" explanation is lacking on at least two counts. In the first place, it doesn't explain how "rebel" figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries: if it were a matter of "culture," it's hard to see how the same technique could work no matter what the underlying business was. And in the second place, if "culture" is the explanation, it's difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, "culture" is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn't why a number of people across industries revolted against their masters—just as it likely doesn't especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually "Capital" gave in to "Talent": "when Miller pushed back, the owners capitulated," Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, "Capital" gave in so easily that it's hard to understand why there was any dispute in the first place.

That's precisely why the ease of that victory is grounds for suspicion: surely, if "Capital" really felt threatened by this so-called "talent revolution" it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II. When Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn't "want to be carried out in a wooden box." Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact that they shared a common language. On the other hand, however, neither does it seem likely that anyone would concede after a minor threat or two. Still, I'd say that thinking about these events in Gladwell's terms makes a great deal more sense than the "cultural" explanation—not because of the final answer they provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry to explain. This time, that industry is the same one where the "cultural" explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of "Talent" at the expense of "Capital"; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, "the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks." The "Talent Revolution," in short, has come to the academy too. Yet, if so, it's had some curious consequences: if "Talent" were something mysterious, one might suspect that it could come from anywhere—yet academia appears to believe that it comes from only a few sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, wrote in Slate recently, "18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors." (In fact, when it comes to history, "the top 10 schools produce three times as many future professors as those ranked 11 through 20.") This, one might say, is curious indeed: why should "Talent" be continually discovered in the same couple of places? It's as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

"Institutional affiliation," as Shea puts the point, "has come to function like inherited wealth" within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. "Since 1976," Patrick Redford noted in Deadspin recently, major leaguers' "salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent." Minor league baseball players can, Redford says, "barely earn a living while playing baseball"—it's not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece describing his playing days in the Jayhawk League in Kansas: "our per diem," Hines reports, "was a measly 15 dollars per day.") And while it might be difficult to have much sympathy for minor league baseball players—They get to play baseball!—that's exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny that minor leaguers are subject to the Fair Labor Standards Act: as the author writing as "the Legal Blitz" put it for Above the Law: Redline, "Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers." In other words, because baseball is a "game" and not a business, baseball doesn't have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale's Peter Brooks told the New York Times when Yale's Graduate Employees and Students Organization (GESO) went out on strike in the late 1990s) that their graduate teachers are "among the blessed of the earth," not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale's administration argued "that graduate students can't possibly be workers, since they are admitted (not hired) and receive stipends (not wages)." But if the pastoral rhetoric surrounding both baseball and the academy—a rhetoric that excludes considerations common to other pursuits, like gambling—is cut away, the position of universities is much the same as Major League Baseball's, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries in at least one respect: as presently constituted, they depend on small numbers of highly productive people—which is just why "Capital" should have tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars are only very rarely productive early in their careers, in other words, so too are baseball players: as Jim Callis noted for Baseball America (as cited in the paper "Initial Public Offerings of Baseball Players," by John D. Burger, Richard D. Grayson, and Stephen Walters), "just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star." Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, most academic production is the work of just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as "Price's Law," "Lotka's Law," or "Bradford's Law." (Or, there's the notion described as "Sturgeon's Law": "90% of everything is crap.") Hence, rationally enough, universities (and baseball teams) want to pay only for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters, when with the same money you can buy just one superstar?

That might explain just why William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren't persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interest to reward the "superstars" wildly, because that bought them the most production at the cheapest rate. Why pay a whole roster of guys to hit your home runs, you might say—when, for much less, you can buy Barry Bonds? (In 2001, for instance, all major leaguers collectively hit over 5,000 home runs—but Barry Bonds hit 73 of them, in a season in which a good everyday player might hit 20.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father's teammates did for their entire careers): given that Barry Bonds was so much more productive than his peers, it's arguable that, despite his vast salary, he was actually underpaid.

If you assign a value to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take all the home runs hit by all the major leaguers in a given season, and then work out the average price paid per home run. Although I haven't actually done the calculation, I would bet that the average price is higher than the price per home run received by Barry Bonds—which isn't even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top-ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there's no way they can collect 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the (wildly lower) sums they pay the (vastly more numerous) non-superstars.
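For readers who want the arithmetic spelled out, here is a minimal sketch of the calculation described above. The league-wide payroll and home-run totals in it are placeholder assumptions, not actual figures; only Harper's numbers (a roughly $2.5 million salary and 42 home runs in 2015) are chosen to be consistent with the $59,524-per-home-run figure cited in the text.

```python
# A minimal sketch of the per-home-run arithmetic described above.
# The league-wide totals below are placeholder assumptions for illustration;
# only Harper's figures (~$2.5 million salary, 42 home runs in 2015) are meant
# to match the $59,524-per-home-run number cited in the text.

def price_per_home_run(salary_dollars: float, home_runs: int) -> float:
    """Average dollars paid per home run, for one player or a whole league."""
    return salary_dollars / home_runs

# League-wide average: total money paid to hitters divided by total home runs hit.
assumed_total_hitter_payroll = 2_000_000_000  # placeholder assumption
assumed_total_home_runs = 5_000               # placeholder assumption
league_average = price_per_home_run(assumed_total_hitter_payroll,
                                    assumed_total_home_runs)

# Bryce Harper, 2015: a rookie-scale salary despite finishing third in home runs.
harper_2015 = price_per_home_run(2_500_000, 42)

print(f"League average (assumed figures): ${league_average:,.0f} per home run")
print(f"Harper, 2015: ${harper_2015:,.0f} per home run")
```

On those assumed figures the league pays several hundred thousand dollars per home run while Harper earns well under a hundred thousand per home run, which is the sense in which a superstar on a deflated contract is "a bargain."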

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying "Talent" vastly more money, "Capital" is not only getting a bargain (because no matter what "Talent" gets paid, it simply couldn't be paid what it is really "worth"), but "Talent's" (seemingly vast, but in reality undervalued) salaries also enable the system to be performed as "fair"—if you aren't getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that's because you're not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational "institutions ranging from U.I.C. to Harvard" like to depict themselves as "meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth." Which, as it happens, just might explain why, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the "American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here." That reality, in turn, is something that American universities, which are supposed to pay attention to events like this, have resolutely turned their heads away from: as Michaels says, "the intellectual left has responded to the increase in economic inequality"—that is, the supposed "Talent Revolution"—"by insisting on the importance of cultural identity." In other words, "when it comes to class difference" (as Michaels says elsewhere), even though liberal professors "have understood our universities to be part of the solution, they are in fact part of the problem." Hence, Ryan's educational accomplishments (remember Ryan? There's an essay about Ryan) aren't actually helping him: in reality, they're precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 


Great Lengths

‘A first class hole must have the subtleties and strategic problems which are difficult to understand, and are therefore extremely likely to be condemned at first sight even by the best of players.’
Alister MacKenzie, The Spirit of St. Andrews (1933; pub. 1995)

Both men were over two hundred yards from the hole when we arrived at their golf balls, far to the left side of Streamsong Red’s thirteenth. My player, though not as skilled a golfer as his companion, was slightly closer to the green; the other player was further away. His caddie counseled him to take a long club, and play up to the right of the dune fronting the thirteenth’s green. The man did, hitting a heroic shot that flew over the center fairway bunker, to the right of the dune. It left him with a short wedge into the green, only partially obscured by the massive dune. My player looked at me, presumably expecting me to counsel similarly. But while I told the other player, “good shot,” I was handing my guy a wedge.

My reasoning, had there been time to follow it at length, had much to do with a golf course nearly three thousand miles away: Riviera Country Club, outside Los Angeles. The thirteenth hole on Streamsong's Red Course draws from that golf course on two distinct levels: in the first place, it is a short par five designed to follow the long par four twelfth—a rehash of a trick the Coore and Crenshaw team had already used on the first and second holes of the same course: a short par five following a par four of nearly the same length. The artifice is inspired by the opening holes of Riviera, a course that begins with one of the easiest par fives in golf, followed by one of the most difficult par fours. But the Red Course, and specifically the thirteenth, also draws much from the thought of Riviera's architect, George Thomas.

"Each hole at Riviera," reads the course's review at the website Golf Club Atlas, "is a 'how to' of golf architecture." One of those lessons is the contrast between the first and second holes: one of the easier par fives on tour (often not even requiring a driver to reach in two shots) followed by the course's number one handicap hole. The idea is a kind of rhyme, in which what happened on the previous hole matters in a way not often found in less sophisticated designs.

One way the first two holes at Riviera rhyme, for example, is by the contrast of their greens: the first hole's green is very wide but not very deep, while the second's is the opposite. Hence, the first forgives a shot that is the correct distance but indifferently aimed, while the second forgives the opposite kind of shot. Conversely, each also punishes the "wrong" sort of shot—the sort that might have been just the thing on the previous hole. It's a subtle but far-reaching effect, one that can be hard to detect—unless you happen to read the scorecard.

A careful reading of any course’s scorecard can, in other words, reveal holes of extremely similar distances; the lesson Coore and Crenshaw, following Thomas, would impart is: “Pay attention when two holes of similar lengths have different par values.” The numbers are a clear signal to the careful golfer, because the choice of length is not haphazard; it is a sign that those two holes have a relation to each other. In the case of the thirteenth and the twelfth on Streamsong’s Red, each is—in part—a funhouse version of the other. Where one is downhill (the 12th) the other is uphill (the 13th), and where one offers a clear view of the green the other obscures it. But the dune of the thirteenth is not just a mirror; it is a razor.

It’s a razor because the thirteenth on the Red Course embodies George Thomas’ thought in an even more subtle sense. “The spirit of golf,” Thomas wrote in his Golf Architecture in America, of 1927, “is to dare a hazard, and by negotiating it reap a reward, while he who fears or declines the issue of the carry, has a longer or harder shot for his second.” Everything in golf revolves around that axis mundi; it is the turtle upon which the disc of the world, as the recently-deceased Terry Pratchett might have appreciated, rests. Proceed by one path, and others become unavailable—every choice, like Borges’ “Garden of Forking Paths,” is determined by previous choices.

One way the thirteenth does this is by separating the golfer from a clear view of the green until he nearly stands upon it. But it does not do that entirely: from the extreme left it’s possible to see the flag, if not the green itself. The trouble—and of course, as George Thomas’ maxim advertises, there is a trouble—is that, from the left, a player must traverse nearly a hundred yards of sand; not so from the right, where a smooth road of fairway grass chases gently to the green. The architecture appears to be designed, in Thomas’ sense, to reward a “spirited carry” over the dune.

Some version of that thought, presumably, is why my colleague counseled his player to play up the right side with the strong shot he hit. Yet two wedge shots of just more than a hundred yards each would easily reach the green—shots that even the worst golfer can usually manage. So why have a player hit a club far more easily mishit, like a long iron, toward a target that grants only a modest advantage? I didn't ask the other caddie for his rationale, but I'd presume it has something to do with the conventions of golf, at least as played by Americans in the early 21st century—conventions that seem to ignore the second part of George Thomas' remarks about the "spirit of golf."

That second part is this: "yet the player who avoids the unwise effort gains an advantage over one who tries for more than in him lies and fails." In other words, the player who can pull off a difficult shot should get the edge over the player who can't—but the player who knows his own game ought to get the edge over the player who does not. In that sense, the thirteenth's "spirited carry" over the dune rewards, as it should, the player with a possible eagle—but, as few seem to realize, it does not reward a heroic second shot that does not finish on the green. In fact, it positively threatens the player who makes that choice.

Just out of sight from the fairway, concealed from anyone standing at a distance from the green, about eighty yards short of it and to the right, Coore and Crenshaw dug a deep bunker that threatens any ball hit past the beginning of the tall dune but not onto the green itself. In other words, a long second shot that does not attempt the green risks leaving the ball in that bunker. Needless to say, it is a difficult recovery, one that more or less takes par—and certainly birdie—off the table. The player who knows he cannot carry the dune, and lays up short of it, has a much easier time than the golfer who hits a long second shot that does not reach the green.

The answer for most American golfers, I'd say, is to hit it as far as possible anyway—even if there isn't a reward at the other end. But that is the ruse of the Red's thirteenth: sometimes it's actually more "daring" to decline the dare. It may be worth noting that Thomas himself, at least as ventriloquized by the golf writer Geoff Shackelford, was rather pessimistic about the possibility of such a lesson ever being learned: "I sense that the combination of technology, refined conditioning, the aerial game and the overall curiosity with fairness have combined to eliminate strategy," says "Thomas" in an interview published in Golf Club Atlas, and these are signs, the great Californian concludes, of "a society willing to go to great lengths to avoid thought." That pessimism may yet prove unfair, however: the existence of the thirteenth at Streamsong's Red is an argument to the contrary.

Excellent Foppery


That sir which serves and seeks for gain,
And follows but for form,
Will pack when it begins to rain,
And leave thee in the storm.
King Lear II.iv

 

We'd been in the badly-lit cart barn for over an hour, as the storm came ashore from the Gulf of Mexico, when my fellow caddie Pistol discovered the scorecard that had been resting on the steering wheel of the cart he was in. The card recorded the events of the first two holes played by a foursome on Streamsong's Red course, and told a tale of much woe: the foursome had played the first hole in an eye-gouging fourteen over par. Five of those over-par strokes came from one poor wretch's nine. "The fact," Pistol laconically observed, "that the guy wrote down the nine means it probably wasn't his first this month." Still, he'd written down the nine, for some mysterious reason—but why? Something I had read recently suggested that the answer involved not only existential despair, but also slot machines and Australian beards.

According to a recent study of men's facial hair by two researchers from the University of New South Wales—a study based on newspaper photographs going back more than a century—there is no one "right" way of wearing facial hair. Instead, what's fashionable in beard styles is governed by something they call "negative frequency dependence," which just means that the desirable style of the day is determined by what's rare, not by anything internal to the style itself. "Patterns of facial hair enjoy greater attractiveness when rare than when they're common," the researchers found. Which, I'd grant you, hardly seems earthshaking, nor does it appear to have much to do with golf.

Bear with me, though, as we try to answer the question of why anyone would habitually write down their nines. "True" golfers, of course, will harrumph at the question itself. "Golf is like solitaire," Tony Lema once said: "When you cheat, you only cheat yourself." Yet given the scores of the other players in the foursome, nines were not unfamiliar to the group—in that case, however, why continue to play? Why not either improve or … just cease to keep score? Continuing, year after year, decade after decade, to play golf poorly seems like one of those mysteries of the human race that alien archeologists will one day wonder over.

As Bill Pennington of the New York Times reported in 2005, the "average 18-hole score for the average golfer remains at about 100, as it has for decades, according to the National Golf Foundation." This, despite the millions spent on game-improvement technology like titanium woods and over-engineered golf balls: technology often researched by (former) rocket scientists who've left NASA or the defense industry in order to find an extra four yards from your seven-iron. Yet, despite the money spent, the fact that this quest has largely been fruitless is just accepted: "Maybe we're all supposed to stink at this," says the revered commentator David Feherty in Pennington's story.

Yet Feherty's line explains nothing, just as—the American philosopher Richard Rorty liked to point out—the claim of the doctor in Molière's play that opium put people to sleep because it had a "dormitive power" explained nothing. Recently, however, I came across an article that just might explain something about this gap between the billions spent and the apparent lack of result: a piece by one Professor Ian Bogost, of Georgia Tech, in The Baffler about a seemingly unrelated subject—the rise of "social media" games like FarmVille or Candy Crush. What Bogost suggests is that such games have a lot to do with that perennial stalwart of the Las Vegas economy: slot machines.

Citing the work of psychologists Geoffrey and Elizabeth Loftus, Bogost tells us that slot machines exploit "a type of operant conditioning that provides a reward intermittently," or "partial reinforcement." In other words, precisely the mechanism that B.F. Skinner explored in his behaviorist experiments with rats: so long as, once in what can be a very great while, a reward gets doled out, there's virtually no length to which mammals will not go. As the subject of the recent short film Lapse: Confessions of a Slot Machine Junkie says of his time in front of the spinning fruit, the slot machine zombies who populate the casinos that have sprung up across America in recent decades are "Irrational, stupid, like a little rat in a wheel." And yet the junkies carry on, even though many of them realize how absurd their behavior is.

For Bogost, that explains the appeal of video games like FarmVille: they "normalize corrupt business practices in the guise of entertainment." Games like these are called "free-to-play," which means that they're free to begin playing: the real point of them, however, is to "give users opportunities to purchase virtual items or add-ons like clothing, hairstyles, or pets for their in-game characters." Or simply the opportunity to continue playing: like their forebears in the video arcades, these games are often designed so that at a certain point a player must either wait some time before playing again, send out "invites" to their social-media friends, or simply throw down some amount of money to continue right then and there. As Bogost puts it, "FarmVille users might have been having fun in the moment, but before long, they would look up to discover they owed their souls to the company store."

What that would seem to say is that the man taking a nine—and not thinking it extraordinary—is playing golf for the few moments of pleasure the game affords him, and ignoring the rest: remembering the fifty-foot putt that dropped, and not the seven shots that preceded it. Or the solid nine-iron from the fairway that somehow stopped next to the cup—and not the sliced drive into the woods, followed by the three chip shots that restored him to the fairway, that led up to that moment. It would be a species of what's often called "selective memory," something we all think we're familiar with at a conversational level these days. But the more sobering idea to arise from Bogost's piece isn't that people ignore evidence that doesn't suit them—it's that golf exists not in spite of, but because of, the intermittent rewards it spits out.

What the idea of "partial reinforcement" suggests is that—seemingly paradoxically—if the casinos rejiggered the slots to pay out more often, that would lead to less play rather than more. The slot machine zombies aren't there for the payoff, but—it could be said—for the long stretches between payoffs. In the same way, it may be that the golfer isn't there for the brilliant shots, but for the series of shots between the fantastic ones: if golfers were better, in other words, the game would not have as much appeal. Just as the slot machine player, deep down, doesn't want to win—or rather, wants to win just enough times to maintain the illusion that he's playing to win—so the golfer doesn't want to get better. In that sense, then, all the money spent on researching the latest hot ball or driver is really spent on creating an illusion for the golfer: the illusion that he really does want to get better—when in fact he does not.

It's about here—in the midst of a rather dark picture not merely of golf, but of human beings generally—that the beards come back in. What the foregoing suggests, after all, is that the reason people continue to play golf badly is precisely the rarity of good shots—just as people, according to the Australians, are attracted to certain beard styles because of their rarity, not because of anything intrinsic to the styles themselves. The appeal of the idea, at least when it comes to golf, is that it explains just why people would rather spend money on expensive golf clubs than on something that would actually improve their games in a lasting way: namely, lessons from a certified professional golfer. So long, in other words, as a person is able to hit the occasional good shot—which, strictly in terms of chance, he or she is bound to do once in a while—it does not particularly matter that all the rest of the time he or she is hitting terrible ones.

Purchasing expensive equipment could then be thought of in two different ways. The first is that it's the same kind of shortcut as, say, taking speed to lose weight. Just as practice is the only real way to get better at golf, so are diet and exercise the only real way to improve your body. But just as a "magic" pill can cause a temporary weight loss without effort (even if it's all gained back later), so can a new driver or new irons produce a minor improvement over your older clubs. Since getting a new club requires only money, whereas lessons and practice require time, it's easy to see why people would go for that kind of fix.

Yet that's not the only possible interpretation here: there's a darker one suggested by the investigation into beards. Remember, no kind of beard is intrinsically better than any other kind—which is to say, there's no way to investigate beards rationally and discover a single "best" kind. If golf is more like that, rather than the kind of thing that can be worked at, then buying a new golf club is not so much a means of improvement (or even a shortcut to one) as a kind of offering to the gods of rationality itself. That is, buying a golf club is like burning a goat (or, say, your daughter, if you have a pressing need to get to Troy and the winds are not cooperating): it is simultaneously a recognition that golf is largely a matter of chance (at least in your own case) and an attempt to influence the hand of fate. What is disturbing about this, to be sure, is the whiff of primitivism about it—the suggestion that the Enlightenment is merely a passing moment in the history of humanity, and that the return of the Dark lies just beneath the surface, or a turn around the corner.

****

The storm outside our cart barn continued. The crowd within it slowly dwindled as the golfers, slowly and then all at once, gave up hope of completing their rounds. Their caddies followed. As the day drew drearily on, and the moisture from the Gulf of Mexico syncopated upon the just and the unjust alike, there were only a few of us left. Pistol remained. "What else," he remarked in the midst of a long silence broken only by the occasional crash of thunder, "have I to do?"
“Grow a beard?” said a voice somewhere in the echoing darkness.
The course closed for the day shortly thereafter.

Is Streamsong Real?

“Young man, the Soviet Union is our adversary. Our enemy is the Navy.”
    —General Curtis LeMay

Just finding Streamsong, the new golf resort ballyhooed as the "Bandon Dunes of Florida," is an exercise in navigation: miles from any interstate highway, it's surrounded by what appears, alternately, to be the savannah of the Serengeti Plain or an apocalyptic post-industrial hellscape. Either a lion pack or Mad Max seems likely to be waiting around the next turn. It's a Florida unknown to the tourists on either coast—but Streamsong exists where the real map of Florida is being drawn, where the real history of the state is being written. That is so even though one of Florida's major exports is a denial that history exists, and even though the resort's operations may, in one sense, dispute the very idea of maps.

Streamsong is located in the central part of Florida, far from the tourist beaches; there are no other big-time golf courses in the area. It consists, so far, of two 18-hole golf courses, the Red and the Blue. The Red was designed by the partnership of Bill Coore and Ben Crenshaw (the latter a Masters winner and a connoisseur of golf course design), and the Blue by Tom Doak's Renaissance Design team. Both teams are grouped together as part of golf's "minimalist" design movement; according to Renaissance Design, the "minimalist's objective is to route as many holes as possible whose main features already exist in the landscape." The landscape that faced the two architectural teams at Streamsong, however, was by no means natural.

This part of Florida is the preserve of enormous cattle ranches and massive phosphate mining operations. They’re industries that don’t often make it into the tourist brochures. Yet as dependent as Florida is on tourism—and at least some of it is definitely golf-related—Streamsong is the result of changes in the second of those industries. And, as it happens, it’s mining that’s at the center of a debate over the future of the state itself, as reported in the Tampa Bay Times in 2010.

Phosphate mining is, according to Richard Wainio, the director of the Tampa Port Authority, "a singular industry … Florida doesn't have a lot of big industries, and this is at or near the top of the pile as far as economic benefit for the state." The phosphate industry, which ships its product through Tampa Bay, is in other words the economic machinery that the gloss of Disney World and South Beach obscures. Most of the state's visitors, and likely by far the majority of its citizens, have little notion of what phosphate mining is or how it can affect their lives. A little backstory might be in order, then.

It begins a couple of hundred million years ago, when the piece of Africa that would become Florida broke away from its parent plate and attached itself to the North American plate during the breakup of the supercontinent Pangea. In the eons since, shallow seas rose and fell over the rock, depositing the fossils that, when they were discovered in the 19th century, led the central part of the state to be called "Bone Valley." Animal bones and teeth concentrate phosphorus, as does the existence of animal life generally: the chemical bonds of phosphate compounds store a great deal of energy, which makes phosphorus necessary for nearly all life on earth—and thus valuable.

“Bone Valley” is drained by the Peace River, which rises near the town of Bartow, the nearest larg(ish) town to Streamsong. A report by the U.S. Army Corps of Engineers on the river—done because the Corps manages the slow-flowing “river of grass” called the Everglades—not long ago held that “phosphate mining had led to the loss of 343 miles of streams and 136,000 acres of wetlands in the Peace River region.” That finding was a major piece of the evidence introduced by the enemies of phosphate mining in their lawsuit.

The largest company mining phosphate in the Bone Valley is Mosaic, a behemoth corporation formed from the merger of two predecessors: IMC Global and the crop nutrition department of Cargill, each of them massive companies in their own right. Mosaic "is the largest producer of finished phosphate products, with an annual capacity greater than the next two producers combined." If any one company has contributed to the degradation of the Peace River, then Mosaic—whose corporate forebears have operated in the Peace River watershed since before 1909—is the primary suspect. And Mosaic is also the owner of Streamsong, which, despite the company's size, is its first foray into golf, or into anything like tourism at all.

It's an odd kind of timing, of course, since the number of golf courses in the United States is declining, not rising, these days. Golf is an industry that took a major hit during the recent economic troubles: "Over the past decade," said the New York Times in 2008, "the leisure activity most closely associated with corporate success in America has been in a kind of recession." Nevertheless, Mosaic went ahead and built two courses by top-name design teams at just the time many courses in the United States were shutting down. Just what that timing may, or may not, have to do with a lawsuit filed in 2010 by environmental groups, including the Sierra Club, seeking to limit phosphate mining is unclear.

If building Streamsong is a tactical exercise meant to further a long-term corporate goal—and there's no knowing at the moment whether it is—then it's well within a Florida tradition of commercial strategy. European intellectuals, for instance, have long noted that Florida is, perhaps even more than California, known as a place with a tenuous connection to reality: the homeland of what the sophisticated Europeans call "hyperreality," a place where signs no longer refer to an external "reality." Where, in fact, the difference between signs and their referents no longer exists.

One such thinker, the Frenchman Jean Baudrillard, conjured up the Argentinian writer Jorge Luis Borges' fable "On Rigor in Science" to describe Disneyland. Borges' short, one-paragraph tale describes an imperial society so wedded to precision that nothing less than "a Map of the Empire whose size was that of the Empire" would do. In such a place, the difference between a place and its representation would break down; so too, Baudrillard argues, are the Disney parks "perfect model[s] of all the entangled orders of simulation." Another such Florida place, which as it happens was the starting point for my own trip to Streamsong, is that seemingly dull "retirement community" (as such places are called), the Villages.

According to one resident, the Villages are "one of the places the Spanish looked for the Fountain of Youth." But where Ponce de Leon left empty-handed, the new residents of the place are more fortunate: "we found it!" Just how the Villages found this "Fountain of Youth" is something that the Mosaic Company might do well to examine. Assuming, to be sure, that it hasn't already.

The real history of the Villages is that they began as a way to sell Florida swampland in the Lady Lake area of the state after the previous way of selling it—mail order—was outlawed by federal law in 1968. (Because it lent itself to fraud so easily, obviously.) The attempts of partners Harold Schwartz—significantly, a former Chicago advertising executive—and Al Tarrson to develop the land as a mobile home park throughout the 1970s largely failed, until in 1983 Schwartz bought out Tarrson and brought his son, H. Gary Morse (also a Chicago ad man), in to run the company. Morse’s idea was to re-aim the company toward a higher income bracket than potential mobile-home owners; the master stroke was building a golf course and not charging greens fees to play it. Tens of thousands of residents followed.

That isn’t, obviously, the history that the resident who talks about Ponce de Leon refers to when he mentions the Fountain of Youth. THAT history, it seems, comes from another source: according to a story from the St. Petersburg Times in 2000, “the Morse family (with the help of a bottle of Scotch and a case of beer) concocted a ‘fanciful history’” of the Villages; complete, in fact, with a reference to a tale of a visit from Ponce de Leon himself. The reason for this fabricated history is simple enough: as Gary Morse himself told the St. Petersburg Times reporter, “We wanted a town to remind them of their youth.”

Yet while the original “town center” development in the Villages—“Spanish Springs”—began the practice of concocting “history” out of whole cloth, it’s the newest—“Lake Sumter Landing”—that sails to a farther shore. “It features,” notes one Timothy Burke, a student at the University of South Florida, in his paper “An Economy of Historicity: The Carefully-Crafted Heritage of the Villages,” “no fewer than 76 ‘historic’ locations”—despite the fact that many of these sites “hadn’t existed six months prior.” Nearly every shop in the shopping district has a plaque advertising the building’s antiquity, complete with some tale or other of a previous tenant or notable: as Umberto Eco, author of “Travels in Hyperreality,” might say, Lake Sumter Landing “blends the reality of trade with the play of fiction.” So, the local movie theater not only claims to be an old vaudeville palace, it asserts that a traveling magician once “threw a playing card from the stage at the ceiling of the theater so hard that the card lodged in a crack in the plaster—where it remains to this day.” The top? Yeah, we’re over it.

Still, the plaques aren’t there just for entertainment value. Nearly all of them refer to how the “original” inhabitants of the place arrived there from somewhere else—as, perhaps not coincidentally, do the current residents. It’s one way that, as Burke says, “the stories contributed to their adaptation of the Villages as a ‘home’”: the fictional characters described in the fictional histories inevitably come from places like Maine or New York, not Alabama or Tennessee. So, for instance, the fictional Upton family, proprietors of the eponymous Feed and Tack Store—“now” a restaurant—came to Lake Sumter from Pennsylvania. Almost certainly, these varied origins are meant to reflect the varied origins of the current residents: the former Nebraska businessmen or Cleveland dentists who chose to spend the rest of their lives there. The “fanciful history,” in other words, allows each new resident to imagine themselves already having “roots” in what is, in reality, a landscape almost wholly ignorant of what actually preceded it.

Burke interviews one resident, for instance, about the fictional history, and asks whether “she felt there was an authentic heritage to the Lady Lake area that was being overlooked” by the fictional history of Spanish Springs and Lake Sumter Landing. “‘Oh,’” the former New Jersey schoolteacher says, “‘but this is Florida. It probably wasn’t the nicest history.’” Perhaps so: actual local historians, Burke says, report that before the “northern invasion” of the Chicago advertising executives, “the Lady Lake area was ruled by cattle baron Clyde Bailey”—who, given the history of the cattle industry in America, was presumably not a Boy Scout.

Assuming, though, that we can juggle the distinction between “real” and “fake” on top of “nice” and “not nice”—a pretty complex mental operation—we might grant that the “fake” history of Lake Sumter Landing is likely “nicer” than the “real” history of Clyde Bailey’s Lady Lake, while still noticing that the real history of the Villages isn’t all that different from Lady Lake’s. Like an old-time robber baron in a company town, Gary Morse owns “all or part of pretty much everything worth owning in the Villages, including the bank, the hospital, the utilities, the garbage collection company, the TV and radio stations, and the newspaper,” according to a story in Slate. But not merely that—which is what got Morse in trouble with the IRS recently.

This summer, the IRS ruled that government bonds issued by the Villages’ governing board—called a community development district, or CDD—“did not deserve to be tax-exempt” like other bonds issued by CDDs throughout Florida. Why? “Because,” as Slate said, “everyone who sits on the district board—like everything else in the Villages—is controlled by Morse.” Or as the New York Times reported: “the IRS states that the district does not function like a true government.” An actual government, for example, is usually worried about what its voters might think about how that government spends its money.

That’s why IRS agent Dominick Servadio questioned “why the Village Center Community District used $60 million in bond proceeds to buy guardhouses, golf courses, and small parks that cost Mr. Morse … less than $8 million to build,” according to the Times. “‘If I was a resident of The Villages,’” Mr. Servadio wrote, “‘I would be outraged by this transaction.’” The Villages, it seems, has responded by saying that Mr. Servadio is not nice: “‘It’s really been upsetting the residents,’” the Times quotes Janet Tutt, district manager for the Villages, “‘to deal with the stress and anxiety.’” One imagines that yes, there is likely some stress involved in discovering that one’s government has been swindled to the tune of a roughly 700 percent markup—but just where the blame lies is perhaps not so clear-cut as Ms. Tutt might suggest.
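The arithmetic behind that figure is simple enough; here is a quick back-of-envelope using the numbers reported by the Times, with the rounding being mine:

```python
# Back-of-envelope on the figures the Times reports; the rounding is mine.
bond_proceeds = 60_000_000   # what the district paid with bond money
build_cost = 8_000_000       # "less than $8 million" to build
markup = (bond_proceeds - build_cost) / build_cost
print(f"Markup: at least {markup:.0%}")
# -> 650%; since the true cost was under $8 million, the markup is higher
#    still, which is to say on the order of 700 percent.
```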

None of this, to be sure, has anything directly to do with Streamsong, which, so far as I know, does not pretend to have always been there. It is true that a golf course—particularly one built in Florida, which was unaffected by the Ice Ages—is always a kind of fakery, because, despite what Tom Doak might claim, no golf course simply takes the land it’s built on as is: “All over the world,” says geologist Anita Harris in John McPhee’s Annals of the Former World, “when people make golf courses they are copying glacial landscapes.” Yet fairly obviously, the resort wasn’t built simply because the company felt that its land demanded a golf course be put upon it, in the way some say the sea at Monterey, California, demanded that Pebble Beach be built. Almost certainly, the company expects some return for its investment: a return that may or may not have any reference to the Sierra Club’s lawsuit.

Yet even were there some “plot” involved in the building of Streamsong, the judgment of whether it actually signifies something “nefarious” or not ultimately comes down to what value you place on phosphate mining generally. As it happens, phosphates are part of all living things: phosphorus is an essential nutrient for plants, for instance, and necessary for nearly all metabolic processes in animals. Phosphates also allow muscles to store energy for immediate use, and they build our teeth and bones. This is not even to address industrial uses—without phosphate mining, in short, a great deal of the contemporary world, “natural” and “artificial,” falls apart.

Countering those points, the Sierra Club notes what opponents of mining always note: that the benefits of mining rarely accrue to those living near the site of the mine. Sixty percent of the ore shipped out of Florida, for example, leaves the United States—historically, mostly to China—and while the mining industry provides some jobs, those numbers are dwarfed by the number of jobs in Florida that depend on a clean Peace River watershed—to say nothing of the hundreds of thousands of people who drink Peace River water. As with nearly all mining operations, phosphate mining leaves behind it a cleanup trail—and in the case of Florida, that includes small amounts of radioactive uranium that will likely outlast even the corporations that do the mining, much less any of us human beings alive today.

To which Jean Baudrillard, for one, might reply “Just so.” Already, in 1981, the French intellectual had published “Simulacra and Simulations,” which argued that, today, the distinction between the Real and the Imaginary had fallen: in his words, the “territory no longer precedes the map.” “Disneyland,” he says, “is presented as imaginary in order to make us believe that the rest is real, when in fact Los Angeles and the America surrounding it are no longer real.” Or, to put it in a way that might be more applicable to those residents of the Villages who appear quite ready to believe that the place was built by Santa Claus, Disneyland “is meant to be an infantile world, in order to make us believe the adults are elsewhere, in the ‘real’ world, and to conceal the fact that childishness is everywhere.” Is Streamsong a cover for iniquitous business practices, or an attempt at an “enlightened” capitalism that recognizes the (alas, completely necessary) damage it does?

Or, to put it another way: Is Streamsong Real?

The End of Golf?

And found no end, in wandering mazes lost.
Paradise Lost, Book II, 561

What are sports, anyway, at their best, but stories played out in real time?
Charles P. Pierce, “Home Fields,” Grantland

We were approaching our tee shots down the first fairway at Chechessee Creek Golf Club, where I am wintering this year, when I got asked the question that, I suppose, will only be asked more and more often. As I got closer to the first ball I readied my laser rangefinder—the one that Butler National Golf Club, outside of Chicago, finally required me to get. The question was this: “Why doesn’t the PGA Tour allow rangefinders in competition?” My response was this, and it was nearly immediate: “Because that’s not golf.” That’s an answer that, perhaps, appeared clearer a few weeks ago, before the United States Golf Association announced a change to the Rules of Golf in conjunction with the Royal and Ancient of St. Andrews. It’s still clear, I think—as long as you’ll tolerate a side-trip through both baseball and, for hilarity’s sake, John Milton.

Through the rest of this year, any player in a tournament conducted under the Rules of Golf is subject to disqualification should she or he take out a cell phone during a round to consult a radar map of incoming weather. But come the New Year, that will be permitted: as the Irish Times wonders, “Will the sight of a player bending down to pull out a tuft of grass and throwing skywards to find out the direction of the wind be a thing of the past?” Perhaps not, but the new decision certainly says where the wind is blowing in Far Hills. Technology is coming to golf, as, it seems, to everything.

At some point, and it isn’t likely that far away, all relevant information will be available to a player in real time: wind direction, elevation, humidity, and, you know, yardage. The question will be, is that still golf? When the technology becomes robust enough, will the game be simply a matter of executing shots, as if all the great courses of the world were simply your local driving range? If so, it’s hard to imagine the game in the same way: to me, at least, part of the satisfaction of playing isn’t just hitting a shot well, it’s hitting the correct shot—not just flushing the ball on the sweet spot, but seeing it fly (or run) up toward the pin. If everyone is hitting the correct club every time, does the game become simply a repetitive exercise to see whose tempo is particularly “on” that day?

Amateur golfers think golf is about hitting shots; professionals know that golf is about selecting which shots to hit. One of the great battles of golf, to my mind, is the contest of the excellent ball-striker vs. the canny veteran. Bobby Jones vs. Walter Hagen, for those of you who know your golf history: Jones was known for the purity of his strikes, while Hagen, like Seve Ballesteros after him, was known for his ability to recover from impure ones. Or we can generalize the point and say golf is a contest between ballstriking and craftiness. If that contest goes, does the game go with it?

That thought would go like this: golf is a contest because Bobby Jones’ ability to hit every shot purely is balanced by Walter Hagen’s ability to hit every shot correctly. That is, Jones might hit every shot flush, but he might not hit the right club, while Hagen might not hit every shot flush, but he will hit the correct club, or to the correct side of the green or fairway, or the like. But if Jones can get the perfection of information that will allow him to hit the correct club more often, that might be a fatal advantage—paradoxically ending the game entirely, because golf becomes simply an exercise in who has the better reflexes. The idea is similar to the way in which the high pitching mound became, in the late 1960s, such an advantage for pitchers that hitting went into a tailspin: in 1968 Bob Gibson was close to unhittable, striking out 268 batters and posting a 1.12 ERA.

As it happens, baseball is (once again) wrestling with questions very like these at the moment. It’s fairly well-known at this point that the major leagues have adopted a system called PITCHf/x, which is capable of tracking every pitch thrown in every game throughout the season—yet still, that system can’t replace human umpires. “Even an automated strike zone,” wrote Ben Lindbergh in the online sports magazine Grantland recently, “would have to have a human element.” That’s for two reasons. One is the more-or-less obvious one that, while an automated system has no trouble judging whether a pitch is over the plate or not (“inside” or “outside”), it has no end of trouble judging whether a pitch is “high” or “low.” That’s because the strike zone is judged not only by each batter’s height, but also by batting stance: two players who are the same height can still have different strike zones because one might crouch more than another, for instance.

There is, however, a perhaps more deeply rooted reason why umpires will likely never be replaced: while it’s true that major league baseball’s PITCHf/x can judge nearly every pitch in every game, every once in (a very great) while the system just flat out doesn’t “see” a pitch. It doesn’t even register that a ball was thrown. So all the people calling for “robot umpires” (it’s a hashtag on Twitter now) are, in the words of Dan Brooks of Brooks Baseball (as reported by Lindbergh), “willing to accept a much smaller amount of inexplicable error in exchange for a larger amount of explicable error.” In other words, while the great majority of pitches would likely be called more accurately, it’s also true that the mistakes made by such a system would be a lot more catastrophic than mistakes made by human umpires. Imagine, say, that Zack Greinke was pitching a perfect game—and the system just didn’t see a pitch.
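Both problems are easy to make concrete. The minimal sketch below (an illustrative toy of my own, not PITCHf/x’s actual model) shows what an automated call has to look like: the top and bottom of the zone must be supplied for each batter’s actual stance, and a pitch the cameras never register yields no call at all.

```python
# A toy automated strike-zone call; illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

PLATE_HALF_WIDTH_FT = 17 / 2 / 12  # home plate is 17 inches wide

@dataclass
class Batter:
    # sz_top / sz_bot must come from this batter's actual stance;
    # two players of the same height can have different zones.
    sz_top_ft: float
    sz_bot_ft: float

def call_pitch(batter: Batter, px: Optional[float], pz: Optional[float]) -> str:
    """px, pz: where the ball crosses the plate (feet).
    None means the tracking system never registered the pitch."""
    if px is None or pz is None:
        return "no reading -- a human has to make the call"
    over_plate = abs(px) <= PLATE_HALF_WIDTH_FT
    in_height = batter.sz_bot_ft <= pz <= batter.sz_top_ft
    return "strike" if (over_plate and in_height) else "ball"

# Illustrative numbers only: the croucher's zone is shorter and lower.
upright = Batter(sz_top_ft=3.6, sz_bot_ft=1.6)
croucher = Batter(sz_top_ft=3.2, sz_bot_ft=1.5)

print(call_pitch(upright, 0.3, 3.4))    # strike for the upright batter
print(call_pitch(croucher, 0.3, 3.4))   # ball for the croucher, same pitch
print(call_pitch(upright, None, None))  # the pitch the system never saw
```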

These are, however, technical issues regarding mechanical aids, not quite the existential issue raised by what we might term a perfectly transparent market. Yet they demonstrate just how difficult such a state would, in practical terms, be to achieve: like arguing whether communism or capitalism is better in its pure state, maybe this is an argument that will never become anything more than a hypothetical for a classroom. The exercise, however, as seminar exercises are meant to do, illuminates something about the object in question: a computer doesn’t know the difference between the first pitch of April and the last pitch of the last game of the World Series—and we do—which, I think, tells us something about what we value in both baseball and golf.

Which is what brings up Milton, since the obvious (ha!) lesson here could be the one that Stanley Fish, the great explicator of John Milton, says is the lesson of Milton’s Paradise Lost: “I know that you rely upon your senses for your apprehension of reality, but they are unreliable and hopelessly limited.” Fish’s point refers to a moment in Book III, when Milton is describing how Satan lands upon the sun:

There lands the Fiend, a spot like which perhaps
Astronomer in the Sun’s lucent Orb
Through his glaz’d optic Tube yet never saw.

Milton compares Satan’s arrival on the sun to the sunspots that Galileo (whom Milton had met) witnessed through his telescope—at least, that is what the first part of the thought appears to imply. The last three words, however—yet never saw—rip away that certainty: the comparison that Milton so carefully sets up between Satan’s landing and the sunspots is, he then tells the reader, actually nothing like what happened.

The pro-robot crowd might see this as a point in favor of robots, to be sure—why trust the senses of an umpire? But what Fish, and Milton, would say is quite the contrary: Galileo’s telescope “represents the furthest extension of human perception, and that is not enough.” In other words, no matter how far you pursue a technological fix (i.e., robots), you will still end up with more or less the problems you had before, only they might be more troublesome than the ones you have now. And pretty obviously, a system that was entirely flawless for every pitch of the regular season—which encompasses, remember, thousands of games just at the major league level, not even to mention the number of individual pitches thrown—and then just didn’t see a strike three that (would have) ended a Game 7 is not acceptable. That’s not really what I meant by “not golf” though.

What I meant might best be explained by reference to (surprise, heh) Fish’s first major book, the one that made his reputation: Surprised by Sin: The Reader in Paradise Lost. That book set out to bridge what had seemed an unbridgeable divide, one that had existed for nearly two centuries at least: a divide between those who read the poem (Paradise Lost, that is) as being, as Milton asked them, intended to “justify the ways of God to men,” and those who claimed, with William Blake, that Milton was “of the Devil’s party without knowing it.” Fish’s argument was quite ingenious: in essence, that Milton’s technique was true to his intention, but that the technique, misunderstood, could easily explain how some readers could misread him so badly. Which is rather broad, to be sure—as in most things, the Devil is in the details.

What Fish argued was that Paradise Lost could be read as one (very) long instance of what are now called “garden path” sentences: grammatical sentences that begin in a way that appears to direct the reader toward one interpretation, only to reveal their true meaning at the end. Very often, they require the reader to go back and reread the sentence, as in “Time flies like an arrow; fruit flies like a banana.” Another example is Emo Philips’ line “I like going to the park and watching the children run around because they don’t know I’m using blanks.” They’re sentences, in other words, where the structure implies one interpretation at the beginning, only to have that interpretation snatched away by the sentence’s end.

Fish argued that Paradise Lost was, in fact, full of these moments—and, more significantly, that they were there because Milton put them there. One example Fish uses is just that bit from Book III, where Satan gets compared, in detail, with the latest developments in solar astronomy—until Milton jerks the rug out with the words “yet never saw.” Satan’s landing is just like a sunspot, in other words … except it isn’t. As Fish says,

in the first line two focal points (spot and fiend) are offered the reader who sets them side by side in his mind … [and] a scene is formed, strengthened by the implied equality of spot and fiend; indeed the physicality of the impression is so persuasive that the reader is led to join the astronomer and looks with him through a reassuringly specific telescope (‘glaz’d optic Tube’) to see—nothing at all (‘yet never saw’).

The effect is a more elaborate version of that of sentences like “The old man the boats” or “We painted the wall with cracks”—typical examples of garden-path sentences. Yet why would Milton go to the trouble of constructing the simile if, in reality, the things being compared are nothing alike? It’s Fish’s answer to that question that made his mark on criticism.

Throughout Paradise Lost, Fish argues, Milton again and again constructs his language “in such a way that [an] error must be made before it can be acknowledged by the surprised reader.” That isn’t an accident: in a sense, it takes the writerly distinction between “showing” and “telling” to its end-point. After all, the poem is about the Fall of Man, and what better way to illustrate that Fall than by demonstrating it—the fallen state of humanity—within the reader’s own mind? As Fish says, “the reader’s difficulty”—that is, the continual state of thinking one thing, only to find out something else—“is the result of the act that is the poem’s subject.” What, that is, were Adam and Eve doing in the garden, other than believing things were one way (as related by one slippery serpent) when actually they were another? And Milton’s point is that trusting readers to absorb the lesson by merely being told it is just what got the primordial pair in trouble in the first place: why Paradise Lost needs writing at all is because our First Parents didn’t listen to what God told them (You know: don’t eat that apple).

If Fish is right, then Milton concluded that just to tell readers, whether of his time or ours, isn’t enough. Instead, he concocted a fantastic kind of riddle: an artifact where, just by reading it, the reader literally enacts the Fall of Man within his own mind. As the lines of the poem pass before the reader’s eyes, she continually credits the apparent sense of what she is reading, only to be brought up short by a sudden change in sense. Which is all very well, it might be objected, but even if that were true about Paradise Lost (and not everyone agrees that it is), it’s something else to say that it has anything to do with baseball umpiring—or golf.

Yet it does, and for just the same reason that Paradise Lost applies to wrangling over the strike zone. One reason we couldn’t institute a system that might simply fail to see one pitch or another is that, while certainly we could take or leave most pitches—nobody cares about the first pitch of a game, for instance, or the middle out of the seventh inning during a Cubs-Rockies game in April—there are some pitches that we absolutely must know about. And if we consider what gives those pitches more value than other pitches—and surely everyone agrees that some pitches have more worth than others—then what we have to arrive at is that baseball doesn’t just take place on a diamond, but also takes place in time. Baseball is a narrative, not a pictorial, art.

To put it another way, what Milton does in his poem is just what a good golf architect does for the golf course: it isn’t enough to be told you should take a five-iron off this tee and a three wood off another. The golfer has to be shown it: what you thought was one state of affairs was in fact another. And not merely that—because that, in itself, would only be another kind of telling—but the golfer, or at least the reflective golfer, must come to see the point as he traverses the course. If a golf hole, in short, is a kind of sentence, then the assumptions with which he began the hole must be dashed by the time he reaches the green.

As it happens, this is just what the Golf Club Atlas says about the fourth at Chechessee Creek, where a “classic misdirection play comes.” At the fourth tee, “the golfer sees a big, long bunker that begins at the start of the fairway and hooks around the left side.” But the green is to the right, which causes the golfer to think “‘I’ll go that way and stay away from the big bunker.’” Yet, because there is a line of four small bunkers somewhat hidden down the right side, and bunkers to the right near the green, “the ideal tee ball is actually left center.” “Standing behind the hole”—that is, once play is over—“the left to right angle of the green is obvious and clearly shows that left center of the fairway is ideal,” which makes the fourth “the cleverest hole on the course.” And it is, so I’d argue, because it uses precisely the same technique as Milton.

That, in turn, might be the basis for an argument for why getting yardages by hand (or rather, foot) is so necessary to professional golf at the highest level. As I mentioned, amateur golfers think golf is about hitting shots, while professionals know that golf is about selecting which shots to hit. Amateurs look at a golf hole and think, “What a pretty picture,” while a professional looks at one and thinks of the sequence of shots it would take to reach the goal. And that’s why—even though so much of golf design is conjured by way of pretty pictures, whether in oils or in photographs, and even though it might be thought that pictures, being “artistic,” are antithetical to the mechanistic forces of computers, so that it is the beauty of golf courses that makes the game irreducible to analysis—that idea, in fact, gets things precisely wrong.

Machines, that is, can paint a picture of a hole that can’t be beat: just look at the innumerable golf apps available for smart phones. But computers can’t parse a sentence like “Time flies like an arrow; fruit flies like a banana.” While computers can call (nearly) every pitch over the course of a season, they don’t know why a pitch in the seventh inning of a World Series game is more important than a pitch in a spring training game. If everything is right there in front of you, then computers or some other mechanical aids are quite useful; it’s only when the end of a process causes you to re-evaluate everything that came before that you are in the presence of the human. Working out yardages without the aid of a machine forces the kind of calculations that can see a hole in time, not in space—to see a hole as a sequence of events, not (as it were) a whole.

Golf isn’t just the ability to hit shots—it’s also, and arguably more significantly, the ability to decide what the best path to the hole is. One argument for why further automation wouldn’t harm the game in the slightest is the tale told by baseball umpiring: no matter how far technological answers are sought, it’s still the case that human beings must be involved in calling balls and strikes, even if not in quite the same way as now. Some people, that is, might read Milton’s warning about astronomy as saying that pursuing that avenue of knowledge is a blind alley, when what Milton might instead be saying is just that the mistake is to think that there could be an end to the pursuit: that is, that perfect information could yield perfect decision-making. We can extend “human perception” all we like—it will not make a whit of difference.

Milton thought that was because of our status as Original Sinners, but it isn’t necessary to take that line to acknowledge limitations, whether they are of the human animal in general or just endemic to living in a material universe. Some people appear to take this truth as a bit of a downer: if we cannot be Gods, what then is the point? Others, and this seems to be the point of Paradise Lost, take this as the condition of possibility: if we were Gods, then golf (for example) would be kind of boring, as merely the attempt to mechanically re-enact the same (perfect) swing, over and over. But Paradise Lost, at least in one reading, seems to assure us that that state is unachievable. As technology advances, so too will human cleverness: Bobby Jones can never defeat Walter Hagen once and for all.

Yet, as the example of Bob Gibson demonstrates, trusting to the idea that, somehow, everything will balance out in the end is just as dewy-eyed as anything else. Sports can ebb and flow in popularity: look at horse racing or boxing. Baseball reacted to Gibson’s 13 shutouts and Denny McLain’s 31 victories in 1968, as well as Carl Yastrzemski’s heroic charge to a .301 batting average, the lowest average ever to win the batting crown. Throughout the 1960s, says Bill James in The New Bill James Historical Abstract, Gibson and his colleagues competed in a pitcher’s paradise: “the rules all stacked in their favor.” In 1969, the pitcher’s mound was lowered from 15 inches to 10, and the strike zone was squeezed too, from the shoulders down to the armpits and from the calves up to the top of the knee. The tide began to turn the other way, until the offensive explosion of the 1990s.

Nothing, in other words, happens in a vacuum. Allowing perfect yardages, so I would suspect, advantages the ballstrikers at the expense of the crafty shotmakers. To preserve the game, then—a game which, contrary to some views, isn’t always the same, and changes in response to events—would require some compensating change in the rules. Just what that might be is hard, for me at least, to say at the moment. But it’s important, if we are to still have the game at all, to know what it is and is not, what’s worth preserving and why we’d like to preserve it. We can sum it up, I think, in one sentence. Golf is a story, not a picture. We ought to keep that which allows golf to continue to tell us the stories we want—and, perhaps, need—to hear.

Windy Orders

Time flies like an arrow; fruit flies like a banana.
Modern Saying


There’s a story told at Royal Troon, site of the “Postage Stamp” par-three hole, about the lady golfer, playing into an extreme wind, who was handed her driver by her caddie. After she hit the shot, as the ball fell helplessly short against the gale, she shouted reproachfully, “You underclubbed me!” It’s a story that has a certain resonance for me—perhaps obviously—but also, more immediately, because of my present work at a golf course in South Carolina, to which I have repaired following the arrival of snow in Chicago. It’s easy enough to imagine something similar occurring at Chechessee Creek’s 16th hole—which, if it did, might not furnish the material for a modest laugh so much as, in concert with the golf course’s next hole, demonstrate something rather more profound.
     Chechessee Creek, the golf course where I am spending this late fall, is a design of the Coore/Crenshaw operation, and it’s very well known that Ben Crenshaw, one of the principals of the firm, considers Chicago Golf Club to be the epitome of good course design. It’s reflected in a number of features of the course: the elevated greens, the various “dunes” strewn about for no apparent reason. But it’s also true that Chicago Golf is, despite its much greater age, by far the more daring of the two courses: it has blind shots and incredibly risky greens where putts can not only fall off the green, but go bounding down the fairway twenty yards or more. There are places where at times it is better to hit a putt off the green deliberately—because that is the only way to get the ball to stop near the hole. Chechessee Creek, for good or ill, has none of these features.
     What it does have, however, is a sense of what David Mihm, writer of the EpicGolf website, calls “pacing.” “Golf is a game,” he points out, “that is experienced chronologically”—that is, it isn’t just the quality of the holes that is important, but also their situation within the golf course as a whole. “By definition,” he says, “part of a hole’s greatness must depend on where it falls in the round.” 
     Chicago Golf Club has that quality of pacing in abundance, starting with the very first hole, Valley. By means of a trompe l’oeil the hole, in reality a 450 yard monster of a par four, appears to be a quite sedate, much-shorter hole. It’s only upon seeing his drive “disappear” (into the concealed vale that gives the hole its name) that the golfer realizes that his eye has misled him. It’s a trick, sure, that would be fantastic on any hole—but is particularly appropriate on the first, since it signals to the golfer immediately—on the first shot of the day—that this is a different kind of golf course, and that he cannot trust what he sees. 
     I would not say that Chechessee Creek exemplifies that notion to the same degree; indeed, it may not be too much to wonder whether South Carolina, or at least its Lowcountry, Tidewater parts, is simply too level a countryside really to lend itself to golf. (“All over the world,” says Anita Harris, the geologist turned tour guide in John McPhee’s monumental Annals of the Former World, “when people make golf courses they are copying glacial landscapes.” South Carolina, needless to say, did not experience the devastations of an ice sheet during the last Ice Age, or any other time.) Still, there is one set of holes that does exhibit what Mihm is talking about—and perhaps something more besides.
     The sixteenth hole at Chechessee is, as the reader may by now have put together, a long par three; so long, in fact, that it isn’t unlikely that a short hitter might use a driver there. But, of course, there is the small matter of pride to contend with—few (male) golfers ever want to concede that they needed a driver on a “short” hole. It’s something I saw often working at Medinah, when coming to the thirteenth hole—almost inevitably, someone would not hit the correct club because he took it as an affront to be told to hit a driver or even a three wood. Fair enough, one supposes; these days, the long par three might be close to becoming a design cliche (and in any case, all the iconic courses I have seen have one: Olympia Fields, Chicago Golf, and Butler do, as does Riviera).
     Just having a long par three isn’t enough, obviously, to satisfy Mihm’s criteria, and it isn’t that alone that makes Chechessee unique or even interesting. What makes the course go is the hole that follows the sixteenth, the seventeenth (duh). It’s an intriguing design in its own right, because it is an example of a “Leven” hole. According to A Disorderly Compendium of Golf (and what better source?), Leven holes are modeled on the 7th at the Leven Links, a hole that no longer exists. The idea of it is simple: it is a short hole with an enormous hazard on one side of the fairway; at Chechessee, the hazard is a long-grassed and swampy depression. Thus, the question posed is, how much of the hazard will you dare? Bailing out to the side leaves the player with a poor, often obstructed view of the green; at Chechessee, that function is furnished by an enormous pine tree.
     Yet that dilemma alone isn’t the real crux of the matter—what matters is that the seventeenth follows the sixteenth. After all, at the sixteenth the golfer is tempted, by his own ego, not to hit enough club. Conversely, at the seventeenth, the golfer is tempted to hit too much club. The quandary posed at each tee, in short, is precisely the mirror of the other: failing to reach for a driver on the sixteenth can cause the player to demand it on the seventeenth—with disastrous consequences in each case. And that is interesting enough merely in terms of golf, to be sure. But what is likely far more intriguing is that these two holes could not be better placed to illustrate—nay, perform—what two psychologists said about how the human mind actually works.
      The psychologists were Daniel Kahneman and Amos Tversky—Kahneman recently received the Nobel Prize for his work with Tversky, who couldn’t receive the award because he died in 1996. What their work did was to uncover, by means of various experiments, some of the hidden pathways of the human mind: the “cognitive shortcuts” taken by the brain. One of these discoveries was the fact that human beings are “loss averse”—or, as Jonah Lehrer put it not long ago in the New Yorker, that for human beings “losses hurt more than gains feel good.” Kahneman and Tversky called this idea “prospect theory.” 
     The effect has been measured in golf. In a paper entitled “Is Tiger Woods Loss Averse? Persistent Bias In the Face of Experience, Competition, and High Stakes” two Wharton professors found that, for PGA Tour golfers, “the agony of a bogey seems to outweigh the thrill of a birdie.” What their data (from the PGA Tour’s ShotLink system, which measures the distance of every shot hit on tour) demonstrated was that tour players “make their birdie putts approximately two percentage points less often than they make comparable par putts.” Somehow, when pros are faced with a par putt instead of a birdie putt—even though they might be identical putts—they make the former slightly more often than the latter. What that translates into is one stroke left on the table per tournament—and that leaves $1.2 million per year in prize money being given away by the top twenty players.
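
How a two-point gap becomes a whole stroke is simple arithmetic; the putt counts in the sketch below are my own rough assumptions, not figures from the paper, but they show the order of magnitude:

```python
# Rough arithmetic only; the per-round putt count is an assumption of mine,
# not a number from the Wharton paper.
birdie_putts_per_round = 15   # assumed makeable birdie tries per round
rounds = 4                    # a standard stroke-play tournament
gap = 0.02                    # birdie putts holed ~2 percentage points less often

strokes_lost = birdie_putts_per_round * rounds * gap
print(f"Strokes left on the table per tournament: about {strokes_lost:.1f}")
# -> about 1.2, i.e., roughly the one stroke per tournament that, over a
#    season, adds up to seven figures for the top players.
```
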
     It’s a phenomenon that’s been found again and again in many disparate fields: investors hold on to too many low-risk bonds, for instance, while condos stay on the market far too long (because their owners won’t reduce their price even during economic downturns), and NFL coaches will take the “sure thing” of a field goal even when it might actually hurt their chances of winning the game. This last, while being about sports, has another dimension of application to golf: the way in which what might be called “social expectations” guide human decision-making. That is, how our ideas about how others judge us play a role in our decisions.
     In the case of the NFL, studies have shown that coaches are far more likely to make the decision to kick the ball—to punt or attempt a field goal—than they are to attempt a first down or a touchdown. This is so even in situations (such as on the opponent’s 2 yard line) where, say, scoring a field goal actually leaves the opponent in a better position: if the team doesn’t get the touchdown or first down, the opponent is pinned against his own goal line, whereas a field goal means a kickoff that will likely result in the opponent starting at the twenty yard line at least. NFL coaches, in other words, aren’t making these decisions entirely rationally. To some, it suggests that they are attempting to act conventionally: that is, by doing what everyone else does, each coach can “hide” better.
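
The expected-value arithmetic behind that claim can be sketched quickly; every number below is an illustrative assumption of mine, not a figure from any of those studies, but the shape of the result is the point:

```python
# Illustrative assumptions only; not data from any study of NFL play-calling.
p_touchdown = 0.45        # assumed chance of scoring from the opponent's 2-yard line
td_points = 7.0
fg_points = 3.0
pinned_at_own_2 = 1.5     # assumed value (to us) of a failed try: opponent backed up
after_kickoff = -0.5      # assumed value (to us) of the opponent's post-kickoff position

go_for_it = p_touchdown * td_points + (1 - p_touchdown) * pinned_at_own_2
kick = fg_points + after_kickoff

print(f"Going for it: {go_for_it:.2f} expected points")   # ~3.98
print(f"Kicking:      {kick:.2f} expected points")        # ~2.50
# Under these made-up numbers the "sure" field goal is the worse bet,
# yet it remains the conventional, face-saving call.
```
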
     What that suggests is just why golfers, faced with the sixteenth hole, are averse to selecting what’s actually the right club. Each golfer is, in a sense, engaged in an arms race with every other golfer: taking more club than another golfer implicitly cedes something to the player taking less. This, despite the fact that, rationally speaking, the club one golfer selects does nothing to the other’s final score. Taking less club becomes a kind of auction—or as we might term it, a bidding war—but one where the risk of “losing face” is seen as more significant than the final score.
     The same process is, if it exists at all, also at work on the seventeenth hole. But this time there’s an additional piece of information playing out in the golfer’s mind: whatever happened on the last hole. One plausible scenario—I’ve seen it happen—is that the player doesn’t take enough club on the sixteenth, and comes up short of the hole. Having made that decision, and been wrong, the golfer determines on the next hole to make the “sensible” choice, and lays up away from the hazard—leaving a difficult second shot to a small green. But here’s the thing: the “carry” on the tee shot on seventeen, which I’ve withheld until now, is only about 210 yards—which is about the same as that of the sixteenth hole. In other words, the reality is that—evaluated dispassionately—golfers should probably hit about the same club on each hole. If they don’t, it’s probably due to a collision between “prospect theory” and “pacing”—which is to say that the Coore and Crenshaw design of Chechessee Creek is, all things considered, clubbed about right.   

The Occult Charm of Chicago Golf Club … (And Why It Doesn’t Matter)

I was standing with a rake in my hand while Chip Beck, the former Ryder Cupper, was giving a sand lesson to Butler’s head pro in the bunker on the par-three eighth. The basic problem, Chip said, was that the club pro wasn’t zipping his right hand under the ball, which wasn’t then spinning enough. It’s the sort of “secret” information amateurs are always asking me about, which is not unreasonable of them. Amateurs can see the vast difference between their game and the pro game, and they intuit that that difference must be due to some dark gnosticism. Which is true, in a way: there are things about golf that you won’t know unless you’ve spent some serious time practicing and studying. Yet that isn’t occult knowledge—it isn’t a birthright; it’s just a matter of putting in the time. And that isn’t the same thing.

It’s a point that was brought home to me by looping a golf course that in many ways is directly opposed to Butler National: Chicago Golf Club, in Wheaton, Illinois. Chicago Golf Club is one of the five founding clubs of the United States Golf Association and the first club in America to have eighteen holes. It’s notoriously hard to get on to; routinely, people attempting to play the 100 top golf courses in America or the world report getting stuck on Chicago Golf, which might be the least accessible course in the world.

Chicago Golf Club is, in other words, as traditional a golf course as exists anywhere. Its first members were stalwarts of the old WASP aristocracy—one of the club’s first presidents was, it seems, Robert Todd Lincoln, Abraham Lincoln’s only surviving son. It is so traditional, indeed, that since—aside from St. Andrews and a very few other courses—it’s one of the oldest golf courses in the world, it might be said that there’s none more traditional.

One of the ways Chicago Golf expresses its traditionalism is by outright banning the use of laser rangefinders and other kinds of technology on its golf course. It’s a policy that’s directly opposed to the policy at Butler National, where every caddie is not only allowed, but required, to have a laser rangefinder so that exact distances can be computed—not only to the pin, in fact, but to any other obstacle that might require measuring.

The distance between the two policies became stark to me during a round at Chicago Golf where I had a guest who was a Butler member. He asked me early on how I knew where the pin was. I said: I don’t. This nearly broke the poor man’s head, it was so alien to him. But as I explained to him, since the location of the pin very seldom even matters at Chicago Golf, not knowing the actual yardage is, in a way, to the golfer’s advantage.

What that uncertainty does is get the golfer out of the mindless rote of this yardage equals this club, which is not only very dangerous but also kind of defeats the purpose of golf itself. Part of the game, at least as played at Chicago, is the intellectual satisfaction of solving the puzzle, not simply the purely physical act of hitting a good shot. Without that, the sachems of Chicago decree, there’s very little point to the game at all—without it, golf is merely a very long range game of HORSE.

From the perspective of Chicago Golf, then, what’s done at Butler isn’t really even golf; it’s merely a kind of calisthenic done in a pasture. But while that’s perhaps a seductive image of golf, there’s another possible view. From the perspective of Butler National, it’s possible to say, what’s done at Chicago Golf is a kind of primitive, perhaps animistic, worship of a dead or dying god. The difference between Butler and Chicago, in other words, maps rather neatly onto another divide in sports these days.

“It’s a battle,” writes Sean McIndoe of the ESPN website Grantland recently, “sports fans have come to know well over the years.” And that’s the conflict between the “analytical” types with their “new stats and theories,” and the “old-school thinkers” who “question how much can be learned from a spreadsheet.” It’s a war, if it can even be called that, that’s been fought out in golf for decades: ever since somebody decided that maybe it might be a good idea to put a bush at the hundred yard marker, golfers have become ever more analytical.

These days, in fact, pro golfers are adding yet another figure to their ever-growing entourages: in addition to caddie, swing coach, mental coach, fitness guru, and dietician, some players are adding statisticians. Before the 2012 British Open, according to Josh Sens’ story in Golf magazine back in July, Brandt Snedeker consulted with “an English numbers wiz” named Mark Horton, who told the professional that while his driving and iron play were good, what really drove Snedeker’s game was putting: “you’re one of the best I’ve ever seen.” And that meant, according to Horton, that Snedeker’s game plan for the majors should be really simple: “‘Just hit the damn green!’”

Luke Donald’s rise to Number One in the world golf rankings is similar: Pat Goss, the coach at Northwestern University where Donald was educated, was “an early adopter of the new analytics,” and studying Donald’s numbers he found that while the Englishman was a good ballstriker, he wasn’t much of a wedge player. Donald took that insight, practiced his wedge play, and as a result climbed the rankings until there wasn’t any further to go.

All of this statistical analysis, of course, might be irritating to the shamans of Chicago Golf, who might say that this sort of deep number crunching is antithetical to the sport. “The statistics,” says Goss, “take the emotions out of it.” But what’s left of the sport if the emotions are taken out, we can imagine a Chicago Golf member asking. What’s the point of playing at all in that case?

The easy way out, to be sure, is just to say that both ideas of the game can co-exist: one, say, for professionals, and one for amateurs. Such a view might comport with our age—an age that has returned, in many ways, to an outlook that might have been familiar to Robert Todd Lincoln. Witness, for instance, the great reverence for Doris Kearns Goodwin’s book Team of Rivals, and the Steven Spielberg movie based on it, Lincoln. The political commentator Thomas Frank noted recently how, around the time of Barack Obama’s election to the presidency in 2008, the political class in Washington was fairly bursting with praise for Team of Rivals, praise that likely reveals more about today’s Washington than about Lincoln’s.

Goodwin’s book is about how—gasp!—Lincoln assembled a cabinet of advisors (a team) who were—double gasp!—once his political competitors (rivals!). Though this is demonstrably true of virtually every leader in virtually every field ever—what leader hasn’t had to preside over people who, had things gone differently, might have been giving him orders?—Frank noticed that, to “a modern-day Washington grandee,” the idea that the electorally defeated could still hang around held the promise of “an election with virtually no consequences.” “No one,” that is, “is sentenced to political exile because he or she was on the wrong side: the presidency changes hands, but all the players still get a seat at the table.” Every kid’s a winner.

In a way, it’s a lovely idea: nobody has to be wrong in such a world. It’s just that, as Frank points out, the film not only praises the notion of Compromise but takes it a step further: Lincoln, the film, “justifies corruption.” In fact, Spielberg & Co. “have gone out of their way to vindicate political corruption.” Oh, you want to ban slavery? the film says. Maybe you won’t mind a few payoffs then. More worrisome, however, is the suggestion underlying that message that there isn’t really anything to be right about—after all, even Thaddeus Stevens was motivated not by hatred for an institution that denied humanity to millions, but by the fact that he lived with a black woman.

It’s easy, though, to decry the “relativism” of our age, and it’s always easy enough to find examples of wishy-washiness. What would be better would be to note just what it is about our own time that specifically lends itself to such arguments. Fortunately, an example is near to hand—the learned of our age nearly all subscribe to a belief that goes, more or less, like the following. “Evidence,” wrote the literary critic Stanley Fish in a recent piece in the New York Times, “is never an independent feature of the world.”

Or, to put it another way, “there is no such thing as ‘common observation’ or simply reporting the facts,” because just what constitutes the facts is what is at issue. In that sense, “simple reporting is never simple and common observation is an achievement of history and tradition, not the result of just having eyes.” Which is just to say that, since it cannot be that one person might see things more clearly than another—both being formed not by the perspicacity of their observational powers, but rather by a particular upbringing within a particular community—it is better to proceed by “consensus,” rather than by deciding that one person is right and the other wrong.

Naturally, that sounds like a reasonable method to proceed by—certainly, one might think it results in the fewest hurt feelings. Yet someone with a long memory might notice that such procedures may violate what the ancients called the “laws of thought”—namely, the law of non-contradiction.

There are, the ancients said, three laws of thought—or at least, one law and two corollaries. The first of these is the law of identity; anyone who says “it is what it is” is quoting this law. It means a thing is itself and not something else. The law of non-contradiction is, arguably, a special case of that law; the best statement of it comes from the Persian philosopher Avicenna.

“Anyone who denies the law of non-contradiction,” Avicenna wrote, “should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” It’s perhaps an extreme way to make the point, but it’s a pretty arresting image. What it means is that either one statement, or its opposite, is true—but not both. Either the correct way to golf is that practiced by Chicago Golf, or that practiced by Butler National.

Which is it? Well, what one might notice about Fish’s theory is that it says that, in order to judge a thing, you have to be already enmeshed in it, already know the code words and the deep meanings—the “secret,” hermeneutical, gnostic knowledge of insiders. It says, in other words, that the true meaning of golf is that envisioned by Chicago Golf.

The view from Butler National, however, holds that choosing the right club is based on an accurate reading of the actual yardage, not on a deep familiarity with the particular course and the particular weather of a certain day. Just so, it’s the sabermetricians who say that anyone equipped with the right stats, not decades of watching minor leaguers, can select the major league ballplayers of tomorrow. It isn’t necessary to have the kind of “deep” knowledge that places like Chicago Golf (or the old baseball scouts of “Moneyball”) implicitly argue is required—a position that, one might think, is obviously on the side of what used to be called “the people” against the interests of the powerful. And yet, through some kind of alchemy, it’s people like Fish who are widely acclaimed to be “leftists” these days.

In reality, however, there’s not a lot of need for “secret” knowledge to understand events. In the case of the government shutdown, for example, the math shows that it is about 18% of the population (as represented in Congress) that is attempting to thwart the will of the other four-fifths. Which is to say that if the Left wants to tackle a project that might actually matter to the masses, it might do better to teach how to count, rather than how to read. “Secret knowledge,” that is, is something most people can’t afford—nor do they have, in reality, any need for it.