The “Hero” We Deserve

“He’s the hero Gotham deserves, but not the one it needs …”
The Dark Knight (2008)

 

The election of Donald Trump, Peter Beinart argued the other day in The Atlantic, was precisely “the kind of democratic catastrophe that the Constitution, and the Electoral College in particular, were in part designed to prevent.” It’s a fairly common sentiment, it seems, in some parts of the liberal press: Bob Cesca, of Salon, argued back in October that “the shrieking, wild-eyed, uncorked flailing that’s taking place among supporters of Donald Trump, both online and off” made an “abundantly self-evident” case for “the establishment of the Electoral College as a bulwark against destabilizing figures with the charisma to easily manipulate [sic] low-information voters.” Those who make such arguments often seem to think that their opponents are dewy-eyed idealists whose vision has been clouded by Frank Capra movies: Cesca, for example, calls the view in favor of direct popular voting an argument for “popular whimsy.” In reality, however, it’s the supposedly liberal argument in favor of the Electoral College that’s based on a misperception: what people like Beinart or Cesca don’t see is that the Electoral College is not a “bulwark” for preventing the election of candidates like Donald Trump—but in fact a machine for producing them. They don’t see it because they do not understand how the Electoral College is built on a flawed understanding of probability—a failure that, perhaps horrifyingly, suggests that the idea that powered Trump’s campaign, the thought that the American leadership class is dangerously out of touch with reality, is more or less right.

To see just how ignorant we all are concerning that knowledge, ask yourself this question (as Howard Wainer, Distinguished Research Scientist of the National Board of Medical Examiners, asked several years ago in the pages of American Scientist): which counties of the United States have the highest incidence of kidney cancer? As it happens, Wainer noted, they “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, say, in view of the fact that rural areas tend to be freer of the pollution that afflicts the largest cities. But, Wainer continued, consider also that the American counties with the lowest incidence of kidney cancer … “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, Wainer remarks, due to “the poverty of the rural lifestyle.” After all, people in rural counties very often don’t receive the best medical care, tend to eat worse, and tend to drink too much and use too much tobacco. But wait—one of these stories has to be wrong; they can’t both be right. Yet, as Wainer goes on to write, they both are true: rural American counties have both the highest and the lowest incidences of kidney cancer. But how?

To solve the seeming mystery, consider a hypothetical example taken from the Nobel Prize-winner Daniel Kahneman’s magisterial book, Thinking, Fast and Slow. “Imagine,” Kahneman says, “a large urn filled with marbles.” Some of these marbles are white, and some are red. Now imagine “two very patient marble counters” taking turns drawing from the urn: “Jack draws 4 marbles on each trial, Jill draws 7.” Every time one of them draws an unusual sample—that is, a sample of marbles that is either all red or all white—that counter records it. The question Kahneman then implicitly asks is: which marble counter will record more all-white (or all-red) samples?

The answer is Jack—“by a factor of 8,” Kahneman notes: Jack is likely to draw a sample of only one color more than twelve percent of the time, while Jill is likely to draw such a sample less than two percent of the time. But it isn’t really necessary to know any high-level mathematics to see why: because Jack draws fewer marbles at a time, he is more likely than Jill to draw a handful that is all one color. By drawing fewer marbles, Jack is more exposed to extreme outcomes—just as it is more likely that, as Wainer has observed, a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category,” while conversely if that same county “has one cancer death it would be among the highest.” Because there are fewer people in rural American counties than in urban ones, a rural county will tend to have a more extreme rate of kidney cancer, either high or low, than an urban one—for the very same reason that Jack is more likely to have a set of all-white or all-red marbles. The sample size is smaller—and the smaller the sample size, the more likely it is that the sample will be an outlier.
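To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not Kahneman’s, and assuming the urn is half red and half white) that computes each counter’s chance of an all-one-color draw:

```python
from fractions import Fraction

def prob_all_one_color(draw_size, p_red=Fraction(1, 2)):
    """Chance that a sample of draw_size marbles is all red or all white,
    assuming each marble is independently red with probability p_red."""
    return p_red ** draw_size + (1 - p_red) ** draw_size

jack = prob_all_one_color(4)   # 1/8, i.e. 12.5 percent
jill = prob_all_one_color(7)   # 1/64, i.e. about 1.6 percent
print(f"Jack: {float(jack):.1%}  Jill: {float(jill):.1%}  ratio: {jack / jill}")
# Jack: 12.5%  Jill: 1.6%  ratio: 8
```

The factor of eight is just the ratio of those two probabilities; nothing about Jack or Jill differs except the size of the handful.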

So far, of course, I might be said to be merely repeating something everyone already knows—maybe you anticipated the point about Jack and Jill and the rural counties, or maybe you just don’t see how any of this has any bearing beyond the lesson that scientists ought to be careful when they are designing their experiments. Perhaps, like many Americans these days, you think that science is one thing and politics something else—maybe because Americans have been taught for several generations now, by people as diverse as the conservative philosopher Leo Strauss and the liberal biologist Stephen Jay Gould, that the humanities are one thing and the sciences another. (Which Geoffrey Harpham, formerly the director of the National Humanities Center, might not find surprising: Harpham has claimed that “the modern concept of the humanities”—that is, as something distinct from the sciences—“is truly native only to the United States.”) But consider another of Wainer’s examples, this one drawn, as it happens, from the world of education.

“In the late 1990s,” Wainer writes, “the Bill and Melinda Gates Foundation began supporting small schools on a broad-ranging, intensive, national basis.” Other foundations supporting the movement for smaller schools included, Wainer reported, the Annenberg Foundation, the Carnegie Corporation, George Soros’s Open Society Institute, and the Pew Charitable Trusts, as well as the U.S. Department of Education’s Smaller Learning Communities Program. These programs brought pressure—to the tune of 1.7 billion dollars—on many American school systems to break up their larger schools (a pressure that, incidentally, succeeded in cities like Los Angeles, New York, Chicago, and Seattle, among others). The reason the Gates Foundation and its helpers cited for pressuring America’s educators was that, as Wainer writes, surveys showed that “among high-performing schools, there is an unrepresentatively large proportion of smaller schools.” That is, when researchers looked at American schools, they found that the highest-achieving schools included a disproportionate number of small ones.

By now, you see where this is going. What all of these educational specialists didn’t consider—but Wainer’s subsequent research found, at least in Pennsylvania—was that small schools were also disproportionately represented among the lowest-achieving schools. The Gates Foundation (led, mind you, by Bill Gates) had simply failed to consider that of course small schools might be overrepresented among the best schools, simply because schools with smaller numbers of students are more likely to be extreme cases. (Something that, by the way, also may have consequences for that perennial goal of professional educators: the smaller class size.) Small schools tend to be represented at the extremes not for any particular reason, but just because that’s how math works.
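A quick simulation (my own illustration with made-up numbers, not the foundation’s or Wainer’s data) shows the effect: give every student the same underlying score distribution, vary only the size of the school, and the small schools crowd both ends of the rankings.

```python
import random

random.seed(1)

def school_mean(n_students):
    """Average test score for a school of n students; every student is drawn
    from the same distribution, so school size is the only thing that varies."""
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

# A hypothetical district: 500 small schools (50 students) and 500 large ones (2,000).
schools = [("small", school_mean(50)) for _ in range(500)] + \
          [("large", school_mean(2000)) for _ in range(500)]
schools.sort(key=lambda s: s[1])

bottom, top = schools[:50], schools[-50:]
print("small schools among the bottom 50:", sum(kind == "small" for kind, _ in bottom))
print("small schools among the top 50:   ", sum(kind == "small" for kind, _ in top))
# Both counts come out at or near 50: the extremes are almost entirely small schools,
# even though no school in the setup is "really" better or worse than any other.
```

Nothing here favors small schools; their averages simply wander farther from the mean because they are computed from fewer students.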

The inherent humor of a group of educators (and Bill Gates) not understanding how to do basic mathematics is, admittedly, self-evident—and incidentally good reason not to take the testimony of “experts” at face value. But more significantly, it also demonstrates the very real problem here: if highly-educated people (along with college dropout Gates) cannot see the flaws in their own reasoning while discussing precisely the question of education, how much more vulnerable is everyone else to flaws in their thinking? To people like Bob Cesca or Peter Beinart (or David Frum; cf. “Noble Lie”), of course, the answer to this problem is to install more professionals, more experts, to protect us from our own ignorance: to erect, as Cesca urges, a “firewall[…] against ignorant populism.” (A wording that, one imagines, reflects Cesca’s mighty struggle to avoid the word “peasants.”) The difficulty with such reasoning, however, is that it ignores the fact that the Electoral College is an instance of the same sort of ignorance as that which bedeviled the Gates Foundation—or that you may have encountered in yourself when you considered the kidney cancer example above.

Just as rural American counties, that is, are more likely to have either lots of cases—or very few cases—of kidney cancer, so too are sparsely populated states more likely to vote in an extreme fashion, out of step with the rest of the country. For one thing, it’s a lot cheaper to convince the voters of Wyoming (the half a million or so of whom possess not only a congressman, but also two senators) than the voters of, say, Staten Island (who, despite being only slightly fewer in number than the inhabitants of Wyoming, have to share a single congressman with part of Brooklyn). Yet the existence of the Electoral College, according to Peter Beinart, demonstrates just how “prescient” the authors of the Constitution were: while Beinart says he “could never have imagined President Donald Trump,” he’s glad that the college is cleverly constructed so as to … well, so far as I can tell Beinart appears to be insinuating that the Electoral College somehow prevented Trump’s election—so, yeeaaaah. Anyway, for those of us still living in reality, suffice it to say that the kidney cancer example illustrates how dividing one big election into fifty smaller ones inherently makes it more probable that some of those subsidiary elections will be outliers. Not for any particular reason, mind you, but simply because that’s how math works—as anyone not named Bill Gates seems intelligent enough to understand once it’s explained.

In any case, the Electoral College thus does not make it less likely that an outlier candidate like Donald Trump will be elected—it makes it more likely. What Beinart and other cheerleaders for the Electoral College fail to understand (whether out of ignorance or some other motive) is that the Electoral College is not a “bulwark” or “firewall” against the Donald Trumps of the world. In reality—a place that, Trump has often implied, those in power seem not to inhabit any more—the Electoral College did not prevent Donald Trump from becoming president of the United States, but was instead (just as everyone witnessed on Election Day) exactly the means by which the “short-fingered vulgarian” became the nation’s leader. Contrary to Beinart or Cesca, the Electoral College is not a “firewall” or some cybersecurity app—it is, instead, a roulette wheel, and a biased one at that.

Just as a sucker can expect that, so long as she stays at the roulette wheel, she will eventually go bust, so too can the United States expect, so long as the Electoral College exists, to get presidents like Donald Trump: “accidental” presidencies, after all, have been an occasional feature of presidential elections since at least 1824, when John Quincy Adams was elected despite the fact that Andrew Jackson had won the popular vote. If not even the watchdogs of the American leadership class—much less that class itself—can see the mathematical point of the argument against the Electoral College, that in and of itself is pretty good reason to think that, while the specifics of Donald Trump’s criticisms of the Establishment during the campaign might have been ridiculous, he wasn’t wrong to criticize it. Donald Trump, then, may not be the president-elect America needs—but he might just be the president people like Peter Beinart and Bob Cesca deserve.

 


The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago’s mayor, Rahm Emanuel, fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be: according to former mayoral candidate Chuy Garcia, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election, he (Garcia) might have won. Yet, as long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the very belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions, while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so, of course, is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes towards change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe François-Marie Arouet, who chose the pen name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death registers as anything more than another nearly anonymous casualty.

Leibniz, the older of the two, is best known for inventing calculus (at the same time as the Englishman Isaac Newton): a mathematical tool that is not only immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes something to it—but that is also, as Wikipedia puts it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts also, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take—by lumping together—what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
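To see the circle example worked out (a standard modern reconstruction of Kepler’s reasoning, not a quotation from either man), slice the disk into n thin triangles fanned out from the center, each with height r and an infinitesimally short base b_k; summing them and letting n grow without bound recovers the familiar formula:

```latex
\[
\text{area}
  \;=\; \sum_{k=1}^{n} \tfrac{1}{2}\, b_k \, r
  \;=\; \tfrac{1}{2}\, r \sum_{k=1}^{n} b_k
  \;\longrightarrow\; \tfrac{1}{2}\, r \,(2\pi r)
  \;=\; \pi r^{2}
  \qquad \text{as } n \to \infty
\]
% the tiny bases b_k, taken together, approach the circumference 2*pi*r
```

Each individual base is negligible on its own; only gathered together do the slivers amount to anything, which is the same gathering-up this essay returns to below.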

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (“nature does not make leaps”), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself is smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and thereby refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened fissures five meters wide in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in that book. “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of people like Bretz and Alvarez (of whom more below), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas now favor sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As Thomas Frank, the former editor of The Baffler, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether they realize it or not. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet, in another sense—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather of the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago could be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” According to a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing up government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point is, one might observe, the same as Leibniz’s: no matter how powerless any single, infinitesimal sheep might be, gathered together they can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.

The End of Golf?

And found no end, in wandering mazes lost.
Paradise Lost, Book II, 561

What are sports, anyway, at their best, but stories played out in real time?
Charles P. Pierce, “Home Fields,” Grantland

We were approaching our tee shots down the first fairway at Chechessee Creek Golf Club, where I am wintering this year, when I got asked the question that, I suppose, will only be asked more and more often. As I got closer to the first ball I readied my laser rangefinder—the one that Butler National Golf Club, outside of Chicago, finally required me to get. The question was this: “Why doesn’t the PGA Tour allow rangefinders in competition?” My response was this, and it was nearly immediate: “Because that’s not golf.” That’s an answer that, perhaps, appeared clearer a few weeks ago, before the United States Golf Association announced a change to the Rules of Golf in conjunction with the Royal and Ancient of St. Andrews. It’s still clear, I think—as long as you’ll tolerate a side-trip through both baseball and, for hilarity’s sake, John Milton.

For the rest of this year, any player in a tournament conducted under the Rules of Golf is subject to disqualification should he or she take out a cell phone during a round to consult a radar map of incoming weather. But come the New Year, that will be permitted: as the Irish Times wonders, “Will the sight of a player bending down to pull out a tuft of grass and throwing skywards to find out the direction of the wind be a thing of the past?” Perhaps not, but the new decision certainly says which way the wind is blowing in Far Hills. Technology is coming to golf, as, it seems, to everything.

At some point, and it probably isn’t that far away, all relevant information will be available to a player in real time: wind direction, elevation, humidity, and, you know, yardage. The question will be: is that still golf? When the technology becomes robust enough, will the game be simply a matter of executing shots, as if all the great courses of the world were simply your local driving range? If so, it’s hard to imagine the game in the same way: to me, at least, part of the satisfaction of playing isn’t just hitting a shot well, it’s hitting the correct shot—not just flushing the ball on the sweet spot, but seeing it fly (or run) up toward the pin. If everyone is hitting the correct club every time, does the game become simply a repetitive exercise to see whose tempo is particularly “on” that day?

Amateur golfers think golf is about hitting shots; professionals know that golf is about selecting which shots to hit. One of the great battles of golf, to my mind, is the contest of the excellent ball-striker versus the canny veteran. Bobby Jones versus Walter Hagen, to those of you who know your golf history: Jones was known for the purity of his striking, while Hagen, like Seve Ballesteros after him, was known for his ability to recover from his impure hits. Or we can generalize the point and say golf is a contest between ballstriking and craftiness. If that contest goes, does the game go with it?

That thought would go like this: golf is a contest because Bobby Jones’ ability to hit every shot purely is balanced by Walter Hagen’s ability to hit every shot correctly. That is, Jones might hit every shot flush, but he might not be hitting the right club; while Hagen might not hit every shot flush, but he will hit the correct club, or to the correct side of the green or fairway, or the like. But if Jones can get the kind of perfect information that will allow him to hit the correct club more often, that might be a fatal advantage—paradoxically ending the game entirely, because golf becomes simply an exercise in who has the better reflexes. The idea is similar to the way in which a higher pitching mound became, in the late 1960s, such an advantage for pitchers that hitting went into a tailspin; in 1968 Bob Gibson became close to unhittable, striking out 268 batters and posting a 1.12 ERA.

As it happens, baseball is (once again) wrestling with questions very like these at the moment. It’s fairly well-known at this point that the major leagues have developed a system called PITCHf/x, which is capable of tracking every pitch thrown in every game throughout the season—yet still, that system can’t replace human umpires. “Even an automated strike zone,” wrote Ben Lindbergh in the online sports magazine Grantland recently, “would have to have a human element.” That’s for two reasons. One is the more-or-less obvious one that, while an automated system has no trouble judging whether a pitch is over the plate or not (“inside” or “outside”), it has no end of trouble judging whether a pitch is “high” or “low.” That’s because the strike zone is judged not only by each batter’s height, but also by batting stance: two players who are the same height can still have different strike zones because one might crouch more than the other, for instance.
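To see why the vertical call is the hard part, here is a minimal sketch (my own illustration with invented names, not how PITCHf/x or any real tracking system works): the horizontal limits of the zone are fixed by the width of home plate, but the vertical limits have to be supplied per batter and per stance, which is exactly the judgment a camera cannot make on its own.

```python
from dataclasses import dataclass

PLATE_HALF_WIDTH_FT = 17 / 2 / 12   # home plate is 17 inches wide: a fixed, known number

@dataclass
class BatterZone:
    """Vertical strike-zone limits, in feet off the ground, for one batter's stance.
    These change from batter to batter (and even at-bat to at-bat), so someone has
    to judge them and feed them to the system."""
    top_ft: float
    bottom_ft: float

def call_pitch(x_ft: float, z_ft: float, zone: BatterZone) -> str:
    """x_ft: horizontal location as the ball crosses the plate (0 = dead center);
    z_ft: height as it crosses. The horizontal test never changes; the vertical
    test depends entirely on the supplied zone."""
    over_plate = abs(x_ft) <= PLATE_HALF_WIDTH_FT
    in_vertical_zone = zone.bottom_ft <= z_ft <= zone.top_ft
    return "strike" if over_plate and in_vertical_zone else "ball"

# Two batters of identical height, one upright and one crouched, see the same pitch:
print(call_pitch(0.2, 3.2, BatterZone(top_ft=3.4, bottom_ft=1.6)))  # strike
print(call_pitch(0.2, 3.2, BatterZone(top_ft=3.0, bottom_ft=1.5)))  # ball
```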

There is, however, a perhaps more deeply rooted reason why umpires will likely never be replaced: while it’s true that major league baseball’s PITCHf/x can judge nearly every pitch in every game, every once in (a very great) while the system just flat out doesn’t “see” a pitch. It doesn’t even register that a ball was thrown. So all the people calling for “robot umpires” (it’s a hashtag on Twitter now) are, in the words of Dan Brooks of Brooks Baseball (as reported by Lindbergh), “willing to accept a much smaller amount of inexplicable error in exchange for a larger amount of explicable error.” In other words, while the great majority of pitches would likely be called more accurately, it’s also true that the mistakes made by such a system would be a lot more catastrophic than the mistakes made by human umpires. Imagine, say, that Zack Greinke were pitching a perfect game—and the system just didn’t see a pitch.

These are, however, technical issues regarding mechanical aids, not quite the existential issues raised by what we might term a perfectly transparent market. Yet they demonstrate just how difficult such a state would, in practical terms, be to achieve: like arguing over whether communism or capitalism is better in its pure state, maybe this is an argument that will never become anything more than a hypothetical for a classroom. The exercise, however, does what seminar exercises are meant to do: it illuminates something about the object in question. A computer doesn’t know the difference between the first pitch of April and the last pitch of the last game of the World Series—and we do—which, I think, tells us something about what we value in both baseball and golf.

Which is what brings up Milton, since the obvious (ha!) lesson here could be the one that Stanley Fish, the great explicator of John Milton, says is the lesson of Milton’s Paradise Lost: “I know that you rely upon your senses for your apprehension of reality, but they are unreliable and hopelessly limited.” Fish’s point refers to a moment in Book III, when Milton is describing how Satan lands upon the sun:

There lands the Fiend, a spot like which perhaps
Astronomer in the Sun’s lucent Orb
Through his glaz’d optic Tube yet never saw.

Milton compares Satan’s arrival on the sun to the sunspots that Galileo (whom Milton had met) witnessed through his telescope—at least, that is what the first part of the thought appears to imply. The last three words, however—yet never saw—rip away that certainty: the comparison that Milton carefully sets up between Satan’s landing and a sunspot turns out, he then tells the reader, to be nothing like what happened.

The pro-robot crowd might see this as a point in favor of robots, to be sure—why trust the senses of an umpire? But what Fish, and Milton, would say is quite the contrary: Galileo’s telescope “represents the furthest extension of human perception, and that is not enough.” In other words, no matter how far you pursue a technological fix (i.e., robots), you will still end up with more or less the same problems you had before, only now they may be more troublesome. And pretty obviously, a system that was entirely flawless for every pitch of the regular season—which encompasses, remember, thousands of games just at the major league level, not even to mention the number of individual pitches thrown—and then just didn’t see a strike three that (would have) ended a Game 7 is not acceptable. That’s not really what I meant by “not golf,” though.

What I meant might best be explained by reference to (surprise, heh) Fish’s first major book, the one that made his reputation: Surprised by Sin: The Reader in Paradise Lost. That book set out to bridge what had seemed an unbridgeable divide, one that had existed for nearly two centuries at least: a divide between those who read the poem (Paradise Lost, that is) as being, as Milton asked them, intended to “justify the ways of God to men,” and those who claimed, with William Blake, that Milton was “of the Devil’s party without knowing it.” Fish’s argument was quite ingenious: in essence, it was that Milton’s technique was true to his intention, but that the technique, misunderstood, could easily explain how some readers could misread him so badly. Which is rather broad, to be sure—as in most things, the Devil is in the details.

What Fish argued was that Paradise Lost could be read as one (very) long instance of what are now called “garden path” sentences: grammatical sentences that begin in a way that appears to direct the reader toward one interpretation, only to reveal their true meaning at the end. Very often, they require the reader to go back and reread the sentence, as in “Time flies like an arrow; fruit flies like a banana.” Another example is Emo Philips’ line “I like going to the park and watching the children run around because they don’t know I’m using blanks.” They’re sentences, in other words, where the structure implies one interpretation at the beginning, only to have that interpretation snatched away by the sentence’s end.

Fish argued that Paradise Lost was, in fact, full of these moments—and, more significantly, that they were there because Milton put them there. One example Fish uses is just that bit from Book III, where Satan gets compared, in detail, with the latest developments in solar astronomy—until Milton jerks the rug out with the words “yet never saw.” Satan’s landing is just like a sunspot, in other words … except it isn’t. As Fish says,

in the first line two focal points (spot and fiend) are offered the reader who sets them side by side in his mind … [and] a scene is formed, strengthened by the implied equality of spot and fiend; indeed the physicality of the impression is so persuasive that the reader is led to join the astronomer and looks with him through a reassuringly specific telescope (‘glaz’d optic Tube’) to see—nothing at all (‘yet never saw’).

The effect is a more-elaborate version of that of sentences like “The old man the boats” or “We painted the wall with cracks”—typical examples of garden-path sentences. Yet why would Milton go to the trouble of constructing the simile if, in reality, the things being compared are nothing alike? It’s Fish’s answer to that question that made his mark on criticism.

Throughout Paradise Lost, Fish argues, Milton again and again constructs his language “in such a way that [an] error must be made before it can be acknowledged by the surprised reader.” That isn’t an accident: in a sense, it takes the writerly distinction between “showing” and “telling” to its end-point. After all, the poem is about the Fall of Man, and what better way to illustrate that Fall than by demonstrating it—the fallen state of humanity—within the reader’s own mind? As Fish says, “the reader’s difficulty”—that is, the continual state of thinking one thing, only to find out something else—“is the result of the act that is the poem’s subject.” What, that is, were Adam and Eve doing in the garden, other than believing things were one way (as related by one slippery serpent) when actually they were another? And Milton’s point is that trusting readers to absorb the lesson by merely being told it is just what got the primordial pair in trouble in the first place: why Paradise Lost needs writing at all is because our First Parents didn’t listen to what God told them (You know: don’t eat that apple).

If Fish is right, then Milton concluded that just to tell readers, whether of his time or ours, isn’t enough. Instead, he concocted a fantastic kind of riddle: an artifact where, just by reading it, the reader literally enacts the Fall of Man within his own mind. As the lines of the poem pass before the reader’s eyes, she continually credits the apparent sense of what she is reading, only to be brought up short by a sudden change in sense. Which is all very well, it might be objected, but even if that were true about Paradise Lost (and not everyone agrees that it is), it’s something else to say that it has anything to do with baseball umpiring—or golf.

Yet it does, and for just the same reason that Paradise Lost applies to wrangling over the strike zone. One reason we couldn’t institute a system that might simply fail to see a pitch is that, while certainly we could take or leave most pitches—nobody cares about the first pitch of a game, for instance, or the middle out of the seventh inning during a Cubs-Rockies game in April—there are some pitches that we must absolutely know about. And if we consider what gives those pitches more value than other pitches—and surely everyone agrees that some pitches have more worth than others—then what we have to arrive at is that baseball doesn’t just take place on a diamond; it also takes place in time. Baseball is a narrative, not a pictorial, art.

To put it another way, what Milton does in his poem is just what a good golf architect does with a golf course: it isn’t enough to be told you should take a five-iron off this tee and a three-wood off another. The golfer has to be shown it: what you thought was one state of affairs was in fact another. And not merely shown—because that, in itself, would only be another kind of telling—the golfer, or at least the reflective golfer, must come to see the point as he traverses the course. If a golf hole, in short, is a kind of sentence, then the assumptions with which he began the hole must be dashed by the time he reaches the green.

As it happens, this is just what the Golf Club Atlas says about the fourth at Chechessee Creek, where a “classic misdirection play” comes. At the fourth tee, “the golfer sees a big, long bunker that begins at the start of the fairway and hooks around the left side.” But the green is to the right, which causes the golfer to think “‘I’ll go that way and stay away from the big bunker.’” Yet, because there is a line of four small bunkers somewhat hidden down the right side, and bunkers to the right near the green, “the ideal tee ball is actually left center.” “Standing behind the hole”—that is, once play is over—“the left to right angle of the green is obvious and clearly shows that left center of the fairway is ideal,” which makes the fourth “the cleverest hole on the course.” And it is, so I’d argue, because it uses precisely the same technique as Milton.

That, in turn, might be the basis for an argument for why getting yardages by hand (or rather, foot) is so necessary to the process of professional golf at the highest level. As I mentioned, amateur golfers think golf is about hitting shots, while professionals know that golf is about selecting which shots to hit. Amateurs look at a golf hole and think, “What a pretty picture,” while a professional looks at one and thinks of the sequence of shots it would take to reach the goal. That is why, even though so much of golf design is conjured by way of pretty pictures, whether in oils or in photographs, and even though it might be thought that it is the beauty of golf courses (their “artistic” quality, supposedly antithetical to the mechanistic forces of computers) that makes the game irreducible to analysis, such an idea gets things precisely wrong.

Machines, that is, can paint a picture of a hole that can’t be beat: just look at the innumerable golf apps available for smartphones. But computers can’t parse a sentence like “Time flies like an arrow; fruit flies like a banana.” While computers can call (nearly) every pitch over the course of a season, they don’t know why a pitch in the seventh inning of a World Series game is more important than one in a spring training game. If everything is right there in front of you, then computers or some other mechanical aids are quite useful; it’s only when the end of a process causes you to re-evaluate everything that came before that you are in the presence of the human. Working out yardages without the aid of a machine forces the kind of calculation that sees a hole in time, not in space—that sees a hole as a sequence of events, not (as it were) a whole.

Golf isn’t just the ability to hit shots—it’s also, and arguably more significantly, the ability to decide what the best path to the hole is. One argument for why further automation wouldn’t harm the game in the slightest is the tale told by baseball umpiring: no matter how far technological answers are sought, it’s still the case that human beings must be involved in calling balls and strikes, even if not in quite the same way as now. Some people, that is, might read Milton’s warning about astronomy as saying that pursuing that avenue of knowledge is a blind alley, when what Milton might instead be saying is just that the mistake is to think that there could be an end to the pursuit: that is, that perfect information could yield perfect decision-making. We can extend “human perception” all we like—it will not make a whit of difference.

Milton thought that was because of our status as Original Sinners, but it isn’t necessary to take that line to acknowledge limitations, whether they are of the human animal in general or just endemic to living in a material universe. Some people appear to take this truth as a bit of a downer: if we cannot be Gods, what then is the point? Others, and this seems to be the point of Paradise Lost, take this as the condition of possibility: if we were Gods, then golf (for example) would be kind of boring, as merely the attempt to mechanically re-enact the same (perfect) swing, over and over. But Paradise Lost, at least in one reading, seems to assure us that that state is unachievable. As technology advances, so too will human cleverness: Bobby Jones can never defeat Walter Hagen once and for all.

Yet, as the example of Bob Gibson demonstrates, trusting to the idea that, somehow, everything will balance out in the end is just as dewy-eyed as anything else. Sports can ebb and flow in popularity: look at horse racing or boxing. Baseball reacted to Gibson’s 13 shutouts and Denny McLain’s 31 victories in 1968, as well as Carl Yastrzemski’s heroic charge to a .301 batting average, the lowest average ever to win the batting crown. Throughout the 1960s, says Bill James in The New Bill James Historical Abstract, Gibson and his colleagues competed in a pitcher’s paradise: “the rules all stacked in their favor.” In 1969, the pitcher’s mound was lowered from 15 to 10 inches and the strike zone was squeezed too, from the shoulders to the armpits, and from the calves to the top of the knee. The tide of the rules began to run the other way, until the offensive explosion of the 1990s.

Nothing, in other words, happens in a vacuum. Allowing perfect yardages, I would suspect, advantages the ballstrikers at the expense of the crafty shotmakers. Preserving the game, then—a game which, contrary to some views, isn’t always the same, and changes in response to events—would require some compensating rule change. Just what that might be is hard, for me at least, to say at the moment. But it’s important, if we are to still have the game at all, to know what it is and is not, what’s worth preserving and why we’d like to preserve it. We can sum it up, I think, in one sentence. Golf is a story, not a picture. We ought to keep that which allows golf to continue to tell us the stories we want—and, perhaps, need—to hear.

Miracle—Or Meltdown?—At Medinah

Very sensible men have declared that they were fully impressed at such a time with the conviction that it was the burning of the world.
—Frederick Law Olmsted
“Chicago In Distress”
The Nation
9 Nov. 1871

“An October sort of city even in spring,” wrote the poet about Chicago. Perhaps that’s why the PGA of America came to Chicago in September, thus avoiding that month of apocalyptic fires and baserunners who forget to tag second. But as it happens, even the Ryder Cup team couldn’t escape the city’s aura by arriving a month early: the Americans still crashed and burned during the singles matches on the final day. Ascribing the American loss to “Chicago” is, however, a romantic kind of explanation—a better one might concern a distinction that golfers of all skill levels ought to think about: the difference between a bad shot and the wrong shot.

The decisive match at this year’s Ryder Cup was probably that between James Furyk (ha!) and Sergio Garcia, the match that drew the European team level with the Americans. After winning the first five matches of the day, the Europeans had suffered setbacks at the hands of the two Johnsons, Dustin and Zach, who had slowed the European charge by winning their two matches. Had Furyk won his match, the American team would have held onto the lead, and since Jason Dufner ended up winning his match immediately afterwards, the United States would only have needed a half in either Steve Stricker’s or Tiger Woods’ match to win the Cup.

Furyk was leading his match late, one up through 16, and it looked as though he had his match in hand when, in Furyk’s words, he misjudged the wind—it was “a little confusing to the players”—and ended up in the back bunker, where he chipped out and left himself “about a 12-footer straight uphill that I misread.” Furyk went on to say that “I heard that most players missed that putt out to the right today.” Furyk missed his putt by leaving it out to the right.

On the 18th Furyk made another series of miscues: first he hit his drive too far right—he commented later that he “was actually surprised it was in the bunker.” It’s a comment I find difficult to understand: if you know the hole, you know that the 18th tee calls for a draw shot, certainly not a fade, which is to say either that Furyk did not understand the hole (which seems unlikely) or that he completely mishit it. And that raises the question of why he did not understand why he was in the bunker: on a course like Medinah, any mistake of execution—which is essentially what Furyk admitted to—is bound to be punished.

Next, Furyk said that he hit a “very good” second shot, but that “the wind was probably a little bit more right-to-left than it was into [towards]” him and so he “was a little surprised to see [the shot] went as long as it [did].” From there, he said he hit his “first putt exactly how I wanted … but it just kept trickling out,” and his second putt “never took the break.” What each of these shots has in common, notice, is that it was a mistake of judgment, rather than execution: it wasn’t that Furyk hit bad shots, it’s that he hit the wrong shots.

That’s an important distinction for any golfer to make: anyone can hit a bad shot at any time (witness, for instance, Webb Simpson’s cold-shank on Medinah’s 8th hole during Sunday’s singles matches, which is as of this writing viewable at cbssports.com). Bad shots are, after all, part of golf; as one British golf writer once wrote of the American Walter Hagen, “He makes more bad shots in a single season than Harry Vardon did from 1890 to 1914, but he beats more immaculate golfers because ‘three of those and one of them’ counts four and he knows it.” Hagen himself said that he expected to “make at least seven mistakes a round,” and so when he hit a bad one it was “just one of the seven.” But wrong shots are avoidable.

Wrong shots are avoidable because they depend not on the frailties of the human body (or, should one wish to extend the thought to other realms, of the physical world entirely) but on the powers of the human mind. In other words, the brain, if it isn’t damaged or impaired in some way, ought to arrive at the correct decision if it is in possession of two things: information and time. Since Furyk was playing golf and not, say, ice hockey, we can say that the “time” dimension was not much of an issue for him. Thus, the mistakes Furyk made must have been due to his having bad or incomplete information.

It’s at this point that it becomes clear that Furyk’s loss, and that of the American team, was not due solely to Furyk’s decisions or those of any other player. If Furyk lost because he hit wrong shots, that is, the American side allowed that mistake to metastasize. While the matches were going on, John Garrity of Sports Illustrated pointed out, as David Dusek of Golf.com paraphrased it afterwards, that “no one on the U.S. team communicated to the matches behind them that 18 was playing short”—as witness Phil Mickelson bombing his approach over the 18th green—“and that the putt coming back down the hill didn’t break.” Garrity himself later remarked that while he didn’t think much of “the whole ‘cult of the captain’ trend,” he would concede that captains “can lose a Ryder Cup.” “Surely,” he thought, “somebody was supposed to tell the later players how 18 was playing.” On the U.S. side, in short, there wasn’t a system to minimize errors of judgment by distributing, or sharing, information.

That’s a mistake no individual player can be asked to shoulder, because it ultimately falls on the American captain, Davis Love III. The golf press is fond of citing the “old saw” that the captains don’t hit any shots in the Ryder Cup. Yet only somebody who isn’t involved in hitting shots—somebody who can survey the whole course—can avoid the mistake observed by Garrity. As a Chicagoan could tell you, any cow can kick over a lantern. But as a Southerner like Love might tell you, only another kind of barnyard animal would not think to tell the neighbors about a barn ablaze.

How To Make Money By Staring Blankly Into Space

It was the second ball, I suppose, that did me in. Well-struck, the last I saw of it from my position next to my golfer—a guest of the member—was as it disappeared into the trees up the right side of hole 10 on Course #3. This was after I’d already lost the guest’s second shot, also in the trees on the right side, also after I’d seen it about as well as it could be seen, at least from that position. The other caddie in our group, Knuckles, who as it happened was standing with his golfers some fifty yards away, saw the ball about as well as I did, and we spent quite a bit of time looking for both balls, though they were essentially unfindable in the leaves of an October afternoon at Medinah. The member—whose name is, incidentally, plastered all over the city’s roads—was unhappy. But the light of October isn’t that of July.

That isn’t meant as a metaphor; not entirely, anyway. As the sun descends in the sky during its slow roll towards the winter solstice, the angle at which light strikes a flying golf ball changes, which (for reasons any beginning physicist could probably explain better) affects the ability to track it through the sky. That doesn’t mean it’s impossible, sure; there are tricks to following a ball in flight that become second nature after a while.

What I’ve found after so many years of following golf balls on very different trajectories, curves, and directions is that there are a couple of rules of thumb to the job—a job that begins even before the players get to the tee. It’s necessary to be in a good position, for instance: far enough from the tee to be closer to the point of landing than the point of ball-club contact, first of all, but not so far from the tee that the ball can’t be discovered immediately after impact. Usually somewhere more than 200 yards is good.

It’s also useful to be in a position where the sun is directly behind you, as if you were taking a photograph. That way, more of the sunlight reflected off the ball reaches you; the ball is a very small object flying somewhere around 150 miles an hour, and it’s hard to spot. For that same reason, it’s better to be somewhere above the player hitting the shot, because that way there’s a better possibility for more photons to strike your retina: the source of those photons, the sun, is always above the ball, so if you are too then you’ve got a better shot at intercepting some of them. (At Chicago Highlands, where I’ve worked occasionally since they opened, I always tried—and taught their new caddies to try—to climb the artificial dunes in the landscape in order to be above the tee box.)
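Just how small a target the ball is can be put in rough numbers (my own back-of-the-envelope arithmetic, not anything a caddie actually computes): a regulation ball is about 1.68 inches across, and ordinary 20/20 vision resolves detail of roughly one arcminute, so a couple hundred yards out the ball subtends less than the eye can comfortably resolve.

```python
import math

BALL_DIAMETER_M = 0.0427        # a regulation golf ball is about 1.68 inches across
ACUITY_ARCMIN = 1.0             # 20/20 vision resolves roughly one arcminute of detail

def angular_size_arcmin(distance_m):
    """Angle the ball subtends at the eye, in arcminutes."""
    radians = 2 * math.atan(BALL_DIAMETER_M / (2 * distance_m))
    return math.degrees(radians) * 60

for yards in (100, 150, 200, 250):
    meters = yards * 0.9144
    size = angular_size_arcmin(meters)
    note = "below" if size < ACUITY_ARCMIN else "above"
    print(f"{yards:>3} yards: {size:.2f} arcmin ({note} the 20/20 limit)")
# By about 200 yards the ball subtends under an arcminute: a white speck at the edge
# of what the eye can resolve, which is why catching its motion matters more than
# trying to "see" the ball itself.
```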

Positioning, however, will only get you so far—even the best position is worthless if you can’t pick up the ball in motion. This is something of a skill, I’ve found. It’s always kind of surprising to me, when I’m playing myself on some public course I’ve managed to squeeze onto as a single on some late afternoon, how rarely my playing partners can see the golf ball flying off the tee. It took me a long time to discover just why: it’s not because they’re blind, it’s because they are, quite literally, looking at the thing wrong.

Again, that isn’t a metaphor. The way to look at the thing right is, like much of golf, highly counterintuitive: people believe that, because they are looking for a small object (the ball) against a very large background (the sky), the thing to do is to focus very tightly and try, like a searchplane hunting for survivors, to scan methodically across the whole sky. Unfortunately, because the sky is so big, this is impossible considering both how much area there is to look at and how fast the ball is traveling. The way to look for a golf ball isn’t, for that reason, to try to look for the ball. The way to do it is to let the ball look for you.

That sounds ridiculous, surely, but it has a physiological basis. Human beings have two ways to track objects in motion, which the psychologists call smooth pursuit tracking and saccadic tracking. Smooth pursuit is the method the brain uses when the object being tracked is relatively predictable; that is, if “you know which way a target will move, or know the target trajectory,” and “especially if you know exactly when the motion will start,” as the relevant article in Wikipedia reports. Smooth pursuit is, as the name implies, steady and orderly. This system is highly developed in human beings (perhaps unsurprising since humans are hunting animals), and allows people to track even objects that are momentarily invisible—due, say, to a passing cloud or patch of trees—and develops as early as six months of age. Based on my experience, this is the system that most people use to track golf balls in flight, which maybe makes sense since most people actually can predict precisely when a ball is put in motion—they’re watching their buddy hit a shot.

Things are different, though, when you’re out forecaddieing. You aren’t able to predict just when contact is made, or at least not readily. What your brain uses in that situation is what the psychologists call saccadic movement, which sounds complicated until you know that it’s what is enabling you to read this very page. When reading, the psychologists have found, your eyes don’t relentlessly read each letter and word in order; they skip around, somewhat randomly, from the beginning of a sentence (or paragraph) to its end, from the top of a page to its bottom. Your ability to understand a sentence can be thrown off by constructing a sentence in an odd way. (Different ways of constructing sentences may obstruct your understanding. Comprehension inhibition results by sentence construction strangeness. Etc.)

It’s possible to discover saccadic movement by looking in a mirror: look directly at your eyes, then shift your gaze. What you’ll find is that you can never actually see your eyes in motion: all you’ll see is the same steady look you started with, only with your eyes resting on a different spot. What that indicates is that your mind suppresses the blurry images that occur when the eye is in motion. That’s why, when reading, you have the impression that you are just following the words in order … when in reality you aren’t. The mind—as distinct from the eye—only allows you to see stable images. That’s one reason why it’s so hard to see a golf ball in flight. A golf ball is blurry, and your brain doesn’t want you to see blurry stuff. That’s one reason why there are lines painted on the road: they provide a stable image for your mind to process even as you zip along.

Your brain’s allergy to blurriness, though, is just what saccadic tracking is designed to take advantage of: the salient point about a golf ball in flight that differentiates it from virtually everything else in the sky is that it’s moving. Or to put it another way, a golf ball in flight is blurry. That’s how saccadic movement can help: by bouncing around randomly, eventually your eye can detect that part of the sky that, in effect, your brain finds unpleasant, at which point you can engage your smooth pursuit system.

To find a golf ball in flight, in other words, it’s best—in effect—not to look for it. Or, as I said above, to let the ball look for you. That’s also why it’s better to be some distance from the ball at impact: standing near it at impact means relying on your smooth pursuit system right from the start, which is fine if the ball is well-struck (and thus predictable) but not so much if it isn’t, which of course is also just when it’s most important to see the ball in flight. Which in turn is perhaps why it is often easier to see tee shots hit with a driver from a forecaddie position than it is to see approach shots hit with a shorter club (which presumably don’t fly as far, which would seem to imply they’d be easier to see) when you’re standing next to the player.

Most members, I’d hasten to say, understand something of how this works, even if just in an unconscious way—I doubt very many could explain it in the depth I’ve taken here—and thus are usually a lot more forgiving if you lose an approach shot that darts offline than if you lose a tee shot. (That’s also why members usually allow you the time to get into a good position away from the ball before they hit a shot from off the fairway, even if they aren’t able to explain just why.) This particular member, however, wasn’t.

Why not? Or to put it another way, why was he in effect demanding that I somehow elude the laws of motion and (more significantly) the means whereby human beings perceive that motion? A lot of people, I suspect, would chalk it up to something endemic to golf itself—and a lot of them would trace it to the sort of thing that Charlie Sifford, for example, the first African-American to hold a PGA Tour card, was referring to recently (all right, more than a month ago) in an interview with the Los Angeles Times’ Bill Plaschke.

“[Bleep] Augusta,” Sifford said.

“When I was good enough to play there,” Sifford went on to explain, “the Masters never invited me, so why would they invite me now?” (They’d invite him, if they did invite him, because Charlie Sifford is the one man responsible for opening the PGA Tour to everyone, because he won the “Negro National Open”—essentially, the U.S. Open for black golfers—for five years running beginning in 1952, because he won twice on the PGA Tour when he finally was allowed to play, and because he won the Senior PGA Championship, long after his prime.) But, it seems, Augusta is not the type of place that could admit it was ever wrong.

A lot of people, I think, would chalk up Augusta’s refusal of Charlie Sifford to simple racism. As Sifford himself reminded people in his interview with Plaschke, Augusta is the place whose former chairman, Clifford Roberts, once supposedly said that so long as he was in charge, the caddies would be black and the golfers white. (As somebody who was once a white caddie at Augusta—yes, there are some there now—I tend to dwell on this.) And, in thinking so, such people are able to reassure themselves we don’t live in such backward times anymore, and thank goodness, and all that.

And, you know, we don’t, which I was reminded of last week when I happened to be skimming the Chicago Reader to look for some restaurant or other. There, on the first page of their website, was an article written by the Reader’s Michael Miner entitled “The Fragile Legacy of Literary Journalism,” the first paragraph of which asks that some scholar, once “all better subjects have been exhausted,” write “a dissertation charting ‘The Evolution of Literary Fashion in the 20th Century Chicago Newsroom.’” The “key document of this saga,” Miner suggests, has already been unearthed: “a memo posted on the newsroom bulletin board [of the now-defunct Chicago Daily News] in the early 50s by crusty city editor Clem Lane.” It’s a name that might not mean much to you—unless you are a great deal older than you are telling your spouse and children, rascal—but apparently once did, at least in Chicago.

And maybe elsewhere too. He was apparently the archetypal city editor, of the sort already noted in the Ben Hecht and Charles MacArthur play The Front Page of 1928 (which later became the classic film, His Girl Friday). He was the sort of person after whom the old school, when it was first built, was named. He was also my father’s great-uncle.

The salient point here, however, isn’t that rabbit hole into Chicago history, and the rest, but rather the memo that Miner describes. Here is that memo, in shortened form: “Short words … short sentences … short leads … short paragraphs.” It amounts to a style that could be called “Chicago City Desk.” It’s the style that, very likely, got taught to you by some long-ago teacher of English, if you had one in whatever village you happen to have escaped from on some long-forgotten evening.

It’s also the style that most writers, whether in newspapers or anywhere else, have long since sought to get away from, whether it be—in the specific case of newspapers and journalism—the “New Journalism” as practiced by Tom Wolfe, Hunter S. Thompson, Truman Capote, Joan Didion, and the rest, or more generally American writing in toto. You could argue, for instance, that there’s a strong linkage between the style Clem directs his reporters to adopt and the style of Ernest Hemingway, that iceberg within American literature around which every newly-launched vessel has since attempted, successfully or not, to steer. There’s a reason David Foster Wallace (who wrote Infinite Jest, and if you haven’t read it, well, you just aren’t cool), for example, writes in such a long-winded (and heavily-footnoted) manner: in part, at least, it’s a kind of Fuck you to all of those people who directed him to write in a short and easily-comprehensible way.

What Clem liked, in short—yes, yes—was for things to be clear, well-defined, sharply distinguishable. And, you could say, American literature—or, more broadly, American writing in general, and hell, since American writing has, since World War II, been something of a standard for the world itself, maybe the world’s writing—has ever since been in revolt against that. And I don’t mean to spell things out too much for you, if you’ve already gotten the conceit thus far, but what could be said is that all of those writers in revolt against the “tyranny” of Chicago City Desk style are people who encourage you to view the world through saccadic movement rather than engaging your smooth pursuit system. They want to persuade you that the truth is off somewhere on the margins, out in the corner of your eye, that point that you see but you don’t see.

Maybe none of that matters except to Literate Americans. I don’t know. But here’s a recent sentence from Paul Krugman, the Nobel Prize-winning economist (and yes, the economics prize is not quite the same as winning one for Physics or Medicine or Literature) who moonlights as a columnist for the New York Times: “Whenever growing income disparities threaten to come into focus, a reliable set of defenders tries to bring back the blur.” Krugman trots out the usual suspects: seemingly-authoritative reports that, really, there isn’t an “income gap,” or other newspaper columnists (meaning, mostly, David Brooks) who say that it isn’t, really, an “income gap” so much as it is an “education gap,” and so on. But the data, Krugman says—and an increasing chorus of experts backs him up—doesn’t lie: the wealth of the richest of the richest Americans, like that member at Medinah, just keeps shooting up. And up. In a steadily-ascending curve.

Like a smoothly-struck tee shot.

A Tiger’s Cold July

Claudio:  Disloyal?
Much Ado About Nothing III, 2

We were driving in to Medinah this morning at our usual time, six, when Scott, who’s twenty-six, started in on the recent death of the singer Amy Winehouse. “Twenty-seven,” he kept saying, going on to repeat the Internet story (now with its own Wikipedia entry!) about the “27 Club” or the “Curse of 27.” “Spooky, huh?” he wanted to know. I replied that I did not find it “spooky” at all—that, in fact, I find it rather more interesting that more celebrities don’t die at that age. I call this the “Celebrity Death Theory,” and while at first it may appear rather far from golf, I think it actually sheds a certain light on Tiger Woods’ recent firing of caddie Steve Williams.

My theory is that celebrities sometimes die at that age because it is part of the life-cycle of the celebrity, considered in the same way that one might look at some interesting species of beetle. Consider: people who become huge, international stars often become so at an early age, sometimes while still in their teens or early twenties. This is because the industries that produce “stars,” the entertainment industries, need young people: first, because people usually look their best at that age (some people, sure, age into their bodies and so on, but for the most part); and second, because the greatest consumers of the entertainment industry’s product are people of that same age.

Still, why should celebrities die at the age of twenty-seven—when, if they achieved their status at some earlier age, they ought to be at their peak of fortune and fame? This, I think, is actually the sort of “puzzle” that resolves itself if you happen to think about it. Twenty-seven is, in reality, a natural age for celebrities to die. Consider: Hendrix and Morrison and Joplin and Cobain (to name four of the people associated with the phenomenon) had all had their international breakthroughs at early ages: as noted, the entertainment industry has a constant need for young, fresh faces, so naturally there is a constant stream of the newly-famous in their early twenties or even younger.

But after that first flush of success, where are you? In the music industry, at least—and other places—there’s a phenomenon known as the “sophomore slump,” which describes what happens when your next album (or book or movie or whatever) isn’t quite the same international smash as your first one. Of course, since we’re talking about the sorts of products that have massively huge success—Nirvana’s Nevermind, for instance—there’s a certain argument that nobody could possibly follow that up with something even more successful: this is what sabermetricians, the stat-heads of baseball, call “regression toward the mean.” An extreme performance is partly skill and partly luck, and the luck, unlike the skill, is unlikely to repeat—which is why it was virtually certain, for example, that Barry Bonds would hit fewer than 70 home runs the season following his record-setting year, even aside from the possibility that he cut back on his steroids intake to avoid detection.
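For the statistically inclined, here is a rough sketch, in Python, of what the stat-heads mean. The model is entirely invented for illustration—each simulated player gets a fixed underlying “talent,” every season adds a dose of luck, and none of the numbers are anyone’s actual statistics—but it shows why an outlier season is almost always followed by a lesser one:

    import random

    # Toy model of regression toward the mean (illustrative numbers only):
    # each simulated player has a fixed "talent," and every season's total
    # is that talent plus season-to-season luck.
    random.seed(42)

    def simulate_season(talent, luck_sd=8.0):
        """One season's home-run total: underlying talent plus random luck."""
        return talent + random.gauss(0, luck_sd)

    talents = [random.gauss(30, 10) for _ in range(100_000)]
    seasons = [(simulate_season(t), simulate_season(t)) for t in talents]

    # Keep only the players whose first season was an outlier (60+ homers).
    outliers = [(y1, y2) for y1, y2 in seasons if y1 >= 60]

    avg_first = sum(y1 for y1, _ in outliers) / len(outliers)
    avg_second = sum(y2 for _, y2 in outliers) / len(outliers)

    print(f"average first season among outliers:  {avg_first:.1f}")
    print(f"average second season for same group: {avg_second:.1f}")

Run it and the group that just posted an outlier season averages noticeably fewer home runs the next year, even though nobody’s underlying talent changed: the luck that produced the outlier simply isn’t repeated.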

A lot of things have to happen to turn somebody from just another recent college graduate—what Morrison was in the summer of 1965—into the sort of star that everyone knew two years later, after the release of the Doors’ self-titled first studio album in the winter of 1967. Most of them, of course, are outside of the control of the “artist,” or worker in the entertainment industry: “fame” is what happens when the particular concerns of one person suddenly are also the concerns of great numbers of other people. That’s what the German philosophers meant when they referred to the “Zeitgeist,” which Shelley (himself one of the first beneficiaries of the process) translated as the “spirit of the age.” In German, it all sounds mysterious and exotic, but what they meant was something far more down-to-earth and prosaic, even if that’s difficult to recall now.

Yet what happens after that first wave of international pop stardom is over, after your name stops being what’s listed in the apartment directory and starts being the last thing the announcer says before you take the stage? Well, for a lot of people, it might be time to take stock, and figure out what to do next. That’s apparently what Morrison was doing in Paris in the summer of 1971—escaping to a city where his name was not a household one, in much the same way that Michael Jordan used to escape to Europe during the NBA’s offseason.

Twenty-seven, in other words, is about the age a person might suddenly have some time on their hands, if they became famous at the age of twenty-one or so. At loose ends, so to speak. Back to Black, the album that made Amy Winehouse famous—and won her five Grammy awards, more than any previous English musician—was released in 2007, nearly five years ago. She had not released an album since then; there’d been a few singles, and just prior to her death there was a twelve-city European tour (the latter dates of which were cancelled due to the catch-all diagnosis of “exhaustion”), but there didn’t seem to be an album due anytime soon—certainly nothing to compete with Back to Black.

These deaths at the age of twenty-seven, in short, are perhaps part of the life-cycle of the species, not something anomalous or strange. There ought to be a certain number of casualties from celebrity status—which, after all, is a life-threatening condition, as we have reason to know now—and so there are, if one examines the record with a cold eye. Which brings us around to the split between Tiger Woods and Steve Williams, which happened two weeks ago and, I would say, is equally part of the life-cycle of the famous person. Part of that cycle, we might suppose, is the increasingly-narrow circle of people that the celebrity can trust: a circle that, in Tiger’s case, was rather small to begin with and has become smaller still in the past several years.

Woods fired his long-time caddie, Steve Williams, at the AT&T National at Aronimink over the weekend of the Fourth of July, apparently because Williams showed up at the tournament with Adam Scott, the Australian golfer Williams first worked for this year at the U.S. Open. Woods was under the impression that the U.S. Open loop at Congressional for Scott was a one-off, and took Williams’ appearance with Scott at Aronimink as a sign of “disloyalty”—never mind that Williams has defended Tiger not only on the golf course (like the time he threw a photographer’s camera in a pond) but also in the press since Tiger’s unfortunate Thanksgiving. Together with how Tiger fired his first looper on tour, Mike “Fluff” Cowan—Cowan had the temerity to appear in commercials—this latest dismissal paints a picture of a man with a particularly narrow interpretation of loyalty.

It was just to escape such an increasingly tight circle, of course, that Jim Morrison, and later Michael Jordan, went to Europe: there they could walk down the street without being mobbed as they might be in the United States. It’s one reason that smaller-scale “celebrities” are drawn to New York or Los Angeles: in cities like that, a mere former governor of Arkansas, say, is lost in the crowd. So what we might say is that “celebrity” status has its own forces at work, much like the forces that draw the salmon back to the same stream year after year, or that move whales from Alaska to Hawaii with the seasons: as the web of international media attention grows tighter, the “celebrity” realizes that success is also a prison with bars of gold instead of steel. Inevitably, as all semblance of a “normal” life is shed, that has an effect on the psyche: innumerable arrest records can attest to the reactions of Winehouse et al. to the price of fame.

Human beings are, after all, social animals: we have to have contact with other human beings in order to stay sane, as studies of the effect of solitary confinement on prison inmates have shown. So what we might think is that there is some algorithm to human behavior whereby increasing celebrity leads to increasing distance from what we like to term the “real world,” a distance that can be measured in altercations and prison sentences, divorces and “rehab treatments.” As that distance increases, the number of people with whom one might have a standard kind of human relation—as Woods apparently had with Williams (who had previously worked for another number one player in the world, Greg Norman, and so was presumably inured to celebrity status)—shrinks, and maybe that means a person will spend an increasing amount of time dwelling on those few relationships still left.

It’s reasonably well-known, for example, that the rest of Tiger’s family—his half-brothers and sisters, and their children—have barely had any contact with him in years, despite the fact that one of his nieces, Cheyenne Woods, is following in her uncle’s footsteps in golf: she’s won more than 30 amateur tournaments. But Tiger cut his ties with those relations shortly after his father’s death, as the CBS newsmagazine 60 Minutes described in a story some years ago. Now these same forces have, we might say, acted to show Steve Williams the door. One can only wonder whether Tiger, who has always prided himself on his own self-mastery, realizes how much his actions are being dictated to him.

The Jazz of Iwo Jima

“Do you like a 9 or a wedge?” my golfer asked me. We stood about 140 yards to the center of the green at the ninth hole of Chicago Highlands, playing downwind to a flag cut roughly toward the front. He was a good player; the yardage and the wind indicated that either of those clubs was possible. I pretended to think a moment—I’d already been rehearsing what to say—then replied, “I think it’s a 7.” My golfer looked back at me without saying anything for a moment, then said, “But we’re downwind.” “I know,” I said, “that’s why.” There was another silence.

It may be a bit late to jump on the bandwagon, but Golf Digest’s architecture editor, Ron Whitten, named Chicago Highlands’ 9th the “Hole of the Year” for 2010. The title may not be the most euphonious, but the ninth is a golf hole—a “giant chocolate drop of a hole,” according to Whitten—that has, to one degree or another, been an enigma to the golfers I’ve worked for this year. It’s a hole I’ve mentioned here, though only in passing, despite the fact that it has been the center of nearly every discussion provoked by the question (“what do you think of the course?”) that I try to ask every one of my golfers.

At most courses I’ve worked at, the better the golfer the more nearly the opinions tend to converge—an interesting phenomenon, that—but at the Highlands, and especially as concerns the 9th, opinions have both converged and yet spread during each conversation, particularly among the better players. Although many of the same points are raised, the golfers I’ve talked to have become less, rather than more, certain of their own minds when discussing the golf course. That’s unusual.

To those who don’t know it, the hole is this: the highest point on the golf course, surrounded on all sides by fairway—it’s not for nothing that, as Whitten says, the hole has been compared to Mt. Suribachi on Iwo Jima, setting for the famous photograph of U.S. Marines raising the flag there. The only feature to the hole besides the hill is a small pot bunker about twenty yards directly in front of the green. The bunker is the single obstacle; there is no water, nor even any rough really. Naturally, it’s difficult to avoid looking at that bunker, the only bunker, from the tee.

Also naturally, the bunker—reminiscent, so far as I can tell, of the “D.A.” at Pine Valley’s 10th—is deep, so deep that merely escaping it requires effort. The green, only 20 yards away, is just as naturally unreachable from it for most—though I did see what may have been the first birdie by someone whose tee shot found the bunker. Yet curiously, the bunker is for most purposes a smokescreen for the hole’s real challenges, which are more hidden and cerebral—and which, in fact, have little to do with the tee shot, and the bunker’s devilish prominence for that tee shot, at all. That fact makes the hole’s outward similarity to Mt. Suribachi a more-than-casual comparison.

The name of the John Wayne film that memorialized the Battle of Iwo Jima—The Sands of Iwo Jima—actually disguises the reality of the battle: the word “Sands” induces thoughts of beaches, so that one imagines the difficulty of the battle was merely landing on the island at all. The word “sands” conjures the nightmares of Omaha Beach on D-Day during the invasion of Normandy—but the strategy of the Japanese at Iwo Jima had nothing to do with stopping the invasion at the waterline. Instead, the Japanese depended on what military tacticians call “defense in depth”: constructing a series of hidden tunnels and bombproof shelters, they proposed to draw the Americans into a meat-grinder that would delay the march towards Tokyo. The “sands” of Iwo Jima, that is, were a mirage: the danger lay not at the high-tide mark on the beach but hidden beyond it. What looked, to the Americans, like a relatively simple conquest once a few speed-bumps were driven over would become ever-more consuming …

In the same way, the “beach” on the ninth is also a mirage: the real problem posed by the design has to do with the green, which is tiny and extremely bumpy, with little dips and hollows scattered about. The question the golfer has to ask is, do you come at the green low, or high? In other words, do you try to land a high, spinning shot directly on top of the pin—with the risk that a spinning shot might come all the way back down the slope? Or do you try to hop the ball in front of the green, hoping that it comes to rest somewhere close to the pin—with the risk that the ball may not stop, may just keep rolling right off the green down the slope on the other side?

I’ve guessed wrong as often as I’ve been right about what shot to play—every shot is different, I suppose, depending as it does not only on the conditions (particularly the wind) but also on the golfer—but the real point of interest to me is that there is a difference at all. That is, most approach shots pose only the comparatively trivial question of which iron to play (9-iron or 8-iron?), while the approach to the ninth actually demands that the golfer think about what shot to play, and only then about which club. That is a kind of thinking that, I’d say, most golfers in America have never really faced in their entire careers.

Almost all approach shots on American golf courses, in other words, ask for precisely the same thing: a high-flying ball that lands and stops somewhere near the pin. Asking for something else, then, often will send American golfers into a kind of catatonia or paralysis; the low-running shot is simply not in their bags. A number of times, while looping the ninth, I’ve mentioned the possibility of trying to run a shot into the green and been met with blank stares, or even outright dismissal, because the golfer cannot comprehend what it is that I’m saying.

Even good golfers are capable of this, which is perhaps why a number of very solid players I’ve worked for have been disparaging of the ninth. Some of them have called it an example of “goofy golf,” a term that has come into vogue as a means of rejecting unconventional forms of architecture. But some, after denigrating the hole, have also come—after some discussion, at times—to find some merit in it. Perhaps, after thinking about it, they come to realize that the way they initially played the hole was not the only way to play it; that, had they played it some other way, they might have had some success. And that, for some golfers, is unusual: good golfers, after all, have found some means of being successful most of the time; for a hole to cause them to re-evaluate not their execution but their strategy is something rare.

“As one who has steadfastly insisted he’d seen it all in golf design, I humbly beg for a mulligan,” says Whitten. He is, I would say, paying tribute to that aspect of the ninth, something that’s exceedingly rare in golf. “There is no hip-hop, rap, or even jazz in golf architecture; it’s all Stephen Foster and John Philip Sousa,” as Whitten has complained elsewhere. In other words, most golf architecture is merely the slavish imitation of the past: the “Redan” hole, the “Cape” hole, the “Biarritz” green—all holes first designed a century or more ago. That isn’t to say that such holes are bad, of course—there’s a reason they’ve been copied, which incidentally is mostly because those styles of holes offer a choice in how to play them—but Whitten’s point is that there is very little in the way of new thinking in golf architecture.

The ninth, I think, offers a way around that. Driven by equipment changes, the setups on Tour and elsewhere have worked towards narrower fairways to offset the tremendous jump in driving distances: the ninth has one of the widest fairways I’ve ever seen. Driving distances have increased so much that classic courses are constantly pushing back their tee boxes: the ninth plays under 300 yards most days. The USGA and the R & A have worked to limit the amount of spin pros can get from their wedges: a spinning wedge shot to the ninth leads almost inevitably to a ball that drops off the front edge of the green. Maybe, I’d suggest, the ninth isn’t “goofy golf” at all; maybe it is, instead, “golfy golf.” Or, maybe, just golf.

My golfer, in the end, hit a low eight-iron that landed on the front of the green—and rolled to the back, a long way from the hole. He ended up three-putting for bogey; not a particular surprise on a green so dominated by rolls and mounds. But on the other hand, as he said later, at least he didn’t double—a speed-bump, not a quagmire.