Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair

And wisdom at one entrance quite shut out
Paradise Lost, Book III, 45-50

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things; the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this time told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great honorifics of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet what often goes unnoticed about arguments in favor of “diversity” is that they themselves depend upon a rather monolithic account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out.”) Fish has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, we know things because of our culture, or social group: not by means of our own senses and judgment, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests a particular epistemic model: rather than one in which each person judges the evidence for herself, we should rely on one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the cognitive limitations of individuals we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also what forms the basis of Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then the University of Michigan’s president) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction John Maynard Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is a division between two ways of knowing—one of them much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these are known only through chains of reasoning, rather than direct experience. To know items like these, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know those things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the English economist wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but another through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way that Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish, for instance, once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any epistemological view other than their own is merely a deception.

Perhaps, though, this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on the head of a pin. At least, until one realizes that the nearly undisputed triumph of the epistemology retailed by Fish and company also has certain quite real consequences. For example, as the case of Bollinger demonstrates, although the “socially constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. Which, after all, is the significance of Kolbert’s remark: to say that redistricting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to grasp the mathematics of the operation directly. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it’s precisely the epistemological blind spots promoted by the often-ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.

To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.


“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware of it, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed again and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht, Oliver Cromwell had said two centuries before, was the one place Irish Catholics could go if they did not wish to be murdered by his New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seed to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, and yet racism remains as potent as the Clinton Democrats’ theory requires, then the American public is not getting less racist—merely better at hiding it. That raises a further question: if the level of racism remains as high as it was in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival, Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation: an all-white slate that had become the “official” delegation only by suppressing the votes of the state’s 400,000 black citizens—a charge that had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but of policy choices made by Democrats: choices that suggest not only that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in making, because as the famously non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president to make economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all, put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including a diminished ability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump began to respond to during the past year’s primary season—though Trump was, for the most part, the only candidate besides Bernie Sanders to address these issues directly.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That transformation has troubling potential consequences, if they have not already become real. One of them has been adequately described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should surely be self-evident, though apparently it isn’t to many people, a lot more will happen economically if 500 people split twenty million dollars than if one person has all of it.
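
The arithmetic behind Stiglitz’s example can be laid out in a few lines. Here is a minimal sketch: the $21.7 million income and the 500 jobs at $43,400 come from the passage above, while the two spending rates are purely illustrative assumptions of mine, not Stiglitz’s figures.

```python
# A toy sketch of Stiglitz's point. The income and job figures are from the
# essay; the spending rates are illustrative assumptions, not Stiglitz's numbers.
income = 21_700_000          # Romney's reported 2010 income
jobs = 500
salary = income / jobs       # = $43,400, the figure Stiglitz uses

rich_spend_rate = 0.10       # assumed share a very rich household spends
worker_spend_rate = 0.95     # assumed share a $43,400 household spends

spending_concentrated = income * rich_spend_rate
spending_divided = jobs * salary * worker_spend_rate

print(f"One household:  ${spending_concentrated:,.0f} spent")
print(f"500 households: ${spending_divided:,.0f} spent")
```

Whatever the exact rates, the direction of the comparison is the point: the same pool of income generates far more spending, and hence more aggregate demand, when it is spread across many households than when it sits with one.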

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’) argument in favor of a broadly-based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.
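
A toy example makes the arithmetic of “packing” visible. The numbers below are mine, invented only to illustrate the mechanism Kolbert and Daley describe, not to reproduce Georgia’s actual districts.

```python
# Toy illustration of vote "packing": district counts and vote totals here
# are invented for illustration, not Georgia's real 1990s numbers.
def seats_won(districts):
    """Count the districts in which Democratic voters outnumber Republicans."""
    return sum(1 for dem, rep in districts if dem > rep)

# Ten districts of 100 voters each; Democrats hold 55% of the overall vote.
spread_out = [(55, 45)] * 10               # Democratic voters spread evenly
packed = [(90, 10)] * 3 + [(40, 60)] * 7   # the same 550 Democrats, packed into 3 districts

print(seats_won(spread_out))  # 10 seats on 55% of the vote
print(seats_won(packed))      # 3 seats on the same 55% of the vote
```

The same statewide vote share can yield ten seats or three, depending only on where the lines are drawn—which is the sense in which a map can guarantee a handful of safe minority seats while costing the party that drew the bargain its majority.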

The deal made by the Congressional Black Caucus (implicitly allowed by the Democratic Party’s leadership), in other words, enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism,” the right of majorities to rule. It’s a point that has been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend,” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” In other words, Macedo asks: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in short, have begun to wonder publicly about why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe, for instance, in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to lend the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun the two causes were much the same. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) And that is to say that Stephen Macedo and Fannie Lou Hamer have chosen a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that those arguments are having very real effects.

What’s really significant, in short, about Macedo’s “insights” about majoritarianism is that, as the possessor of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia, and other pet campus causes can effectively function as what Lenin is said to have called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to think that their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although they might not express themselves in terms that those with elite educations find palatable—in a fashion that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while the organs of elite opinion like The New York Times or other outlets might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.


Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.

The Weight We Must Obey

The weight of this sad time we must obey,
Speak what we feel, not what we ought to say.
King Lear, V.iii

There’s a scene in the film Caddyshack that at first glance seems like a mere throwaway one-liner, but that rather neatly sums up what I’m going to call the “Kirby Puckett” problem. Ted Knight’s Judge Smails character asks Chevy Chase’s Ty Webb character how, if Webb doesn’t keep score as he claims, he measures himself against other golfers. “By height,” Webb replies. It’s a witty enough reply on its own, of course. But it also (and perhaps there’s a greater humor to be found here) raises a rather profound question: is there a way to know someone is a great athlete—aside from their production on the field? Or, to put the point another way, what do bodies tell us?

I call this the “Kirby Puckett” problem because of something Bill James, the noted sabermetrician, once wrote in his New Historical Baseball Abstract: “Kirby Puckett,” James observed, “once said that his fantasy was to have a body like Glenn Braggs’.” Never heard of Glenn Braggs? Well, that’s James’ point: Glenn Braggs looked like a great ballplayer—“slender, fast, very graceful”—but Kirby Puckett was a great ballplayer: a first-ballot Hall of Famer, in fact. Yet despite his own greatness—and surely Kirby Puckett was aware he was, by any measure, a better player than Glenn Braggs—Puckett could not help but wish he appeared “more like” the great player he, in reality, was.

What we can conclude from this is that a) we all (or most of us) have an idea of what athletes look like, and b) it’s extremely disturbing when that idea is called into question, even when you yourself are a great athlete.

This isn’t a new problem, to be sure. It’s the subject, for instance, of Moneyball, the book (and the movie) about how the Oakland A’s, and particularly their general manager Billy Beane, began to apply statistical analysis to baseball. “Some scouts,” wrote Michael Lewis in that book, about the difference between the A’s old and new ways of doing things, “still believed they could tell by the structure of a young man’s face not only his character but his future in pro ball.” What Moneyball is about is how Beane and his staff learned to ignore what their eyes told them, and judge their players solely on the numbers.

Or, in other words, to predict future production only by past production, instead of by what appearances seemed to promise. Now, fairly obviously, that doesn’t mean that coaches and general managers of every sport ought to ignore their players’ appearances entirely when evaluating their future value. Indisputably, many different sports have an ideal body. Jockeys, of course, are small men, whereas football players are large ones. Basketball players are large, too, but in a different way: taller and not as bulky. Runners and bicyclists have yet a different shape. Pretty clearly, completely ignoring those factors would quickly lead any talent judge far astray.

Still, the variety of successful body types in a given sport may be broader than we imagine—and broader still in some sports than in others. Golf, for example, may be a sport with a particularly broad range of potentially successful bodies. Roughly speaking, golfers of almost any body type have been major champions.

“Bantam” Ben Hogan, for example, greatest of ballstrikers, stood 5’7” and weighed about 135 pounds during his prime; going farther back, Harry Vardon, who invented the grip used almost universally today and won the British Open six times, stood 5’9” and weighed about 155 pounds. By contrast, Jack Nicklaus was known as “Fat Jack” when he first came out on tour—a nickname that tells its own story—and long before then Vardon had competed against Ted Ray, who won two majors of his own (the 1912 British and the 1920 U.S. Opens) and was described by his contemporaries as “hefty.” This is not even to bring up, say, John Daly.

The mere existence of John Daly, however, isn’t strong enough to expand our idea of what constitutes an athlete’s body. Golfers like Daly and the rest don’t suggest that the overweight can be surprisingly athletic; instead, they provoke the question of whether golf is a sport at all. “Is Tiger Woods proof that golf is a sport, or is John Daly confirmation to the contrary?” asks a post on Popular Science’s website entitled “Is Golf a Sport?” There’s even a Facebook page entitled “Golf Is Not a Sport.”

Facebook pages like the above confirm just how difficult it is to overcome our idealized notions of what athletes are. It’s to the point that if somebody, no matter how skillful his efforts, doesn’t appear athletic, then we are more likely to narrow our definition of athletic acts than to expand our definition of athletic bodies. Thus, Kirby Puckett had trouble thinking of himself as an athlete, even though he excelled in a sport that virtually anyone would define as one.

Where that conclusion could (and, to some minds, should) lead us is to the notion that a great deal of what we think of as “natural” is, in fact, “cultural”—that favorite thesis of the academic Left in the United States, the American liberal arts professors proclaiming the good news that culture trumps nature. One particular subspecies of the genus is the supposedly expanding (aaannnddd rimshot) field called by its proponents “Fat Studies,” which (according to Elizabeth Kolbert of The New Yorker) holds that “weight is not a dietary issue but a political one.” What these academics think, in other words, is that we are too much the captives of our own ideas of what constitutes a proper body.

In a narrow (or, anti-wide) sense, that is true: even Kirby Puckett was surprised that he, Kirby Puckett, could do Kirby Puckett-like things while looking like Kirby Puckett. To the academics involved in “Fat Studies” his reaction might be a sign of “fatphobia, the fear and hatred of fatness and fat people.” It’s the view of Kirby Puckett, that is, as self-hater; one researcher, it seems, has compared “fat prejudice … to anti-semitism.” In “a social context in which fat hatred is endemic,” this line of thinking might go, even people who achieve great success with the bodies they have can’t imagine that success without the bodies that culture tells them ought to be attached to it.

What this line of work might then lead us to is the conclusion that the physical dimensions of a player matter very little. That would make the success of each athlete largely independent of physical advantage—and thereby suggest that thousands of coaches everywhere would, at least in golf, be justified in asserting that success is due to the “will to succeed” rather than a random roll of the genetic dice. It might mean that nations looking to achieve success in golf (in expectation, perhaps, of the next Summer Olympics, where golf will be a medal sport)—like, for instance, the Scandinavian nations whose youth athletics programs groom golfers, or nations like Russia or China with large populations but next to no national golf tradition—should look for young people with particular psychological characteristics rather than particular physical ones.

Yet whereas “Fat Studies” or the like might focus on Kirby Puckett’s self-image, Bill James instead focuses on Kirby Puckett’s body: the question James asks isn’t whether Puckett played well despite his bad self-image, but rather whether Puckett played well because he actually had a good body for baseball. James asks whether “short, powerful, funny-looking kind of guy[s]” actually have an advantage when it comes to baseball, rather than height, with the faster bat speed and other advantages it is usually assumed to confer. “Long arms,” James speculates, “really do not help you when you’re hitting; short arms work better.” Maybe, in fact, “[c]ompressed power is more effective than diffuse power,” and James goes on to name a dozen or more baseball stars who were all built something like Honus Wagner, who stood 5’11” and weighed 200 pounds. Which, as it happens, was also about the stat line for Jack Nicklaus in his prime.

So, as it happens, are a number of other golfers. For years the average height of a PGA Tour player was usually said to be 5’9”; these days, due to players like Dustin Johnson, that stat is most often said to be about 5’11”. Still—as remarked by the website Golf Today—“very tall yet successful golfers are a rarity.” I don’t have the ShotLink data—which has a record of every shot hit by a player on the PGA Tour since 2003—to support the idea that players of one size or another have a natural advantage, though today it could probably be obtained. What’s interesting about even asking the question, however, is that it is a much-better-than-merely-theoretically-solvable problem—which significantly distinguishes it from the kind of question the scholars of “Fat Studies” might frame around our notions of what constitutes an athletic body.
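
To make the point that the question is solvable concrete, here is a minimal sketch of one way it could be tested, assuming one had a simple table of players with heights and a performance measure. The file name and column names below are hypothetical stand-ins of mine, not the actual ShotLink schema.

```python
# A sketch of testing whether player size predicts performance.
# "tour_players.csv", "height_inches", and "strokes_gained_total" are
# hypothetical names used for illustration, not real ShotLink fields.
import pandas as pd
from scipy import stats

players = pd.read_csv("tour_players.csv")  # one row per player

# Correlation between height and a season-long performance measure.
r, p = stats.pearsonr(players["height_inches"], players["strokes_gained_total"])
print(f"height vs. performance: r = {r:.2f}, p = {p:.3f}")

# Or bucket players by build and compare group averages.
players["build"] = pd.cut(players["height_inches"],
                          bins=[0, 69, 72, 100],
                          labels=["short", "medium", "tall"])
print(players.groupby("build", observed=True)["strokes_gained_total"].mean())
```

Either a correlation or a simple comparison of group averages would, at least in principle, settle whether James’s “compressed power” hunch carries over to golf—which is exactly what makes it a different kind of question from the cultural one.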

Even aside from the narrow issue of allocating athletic resources, however, there’s reason for distrusting those scholars. It’s true, to be sure, that Kirby Puckett’s reaction to being Kirby Puckett lends some support to the thought that a critical view of our notions of the body is salutary—particularly in an age when our notions of what bodies are and should be are, to add to an already-frothy mix of elements, increasingly driven by an advertising industry that, in the guise of either actors or models, endlessly seeks the most attractive bodies.

It would be easier to absorb such warnings, however, were there not evidence that obesity is not remaining constant, but is rather a, so to say, growing problem. As Kolbert reports, the federal government’s Centers for Disease Control, which has for decades taken measurements of American health, found that whereas in the early 1960s a quarter of Americans were overweight, now more than a third are. And in 1994, those results were written up in the Journal of the American Medical Association: “If this was about tuberculosis,” Kolbert reports one researcher saying, “it would be called an epidemic.” Over the decade previous to that report, Americans had, collectively, gained over a billion pounds.

Even if “the fat … are subject to prejudice and even cruelty,” in other words, that doesn’t mean that obesity poses no serious health risks, both for the individual and for society as a whole. The extra weight carried by Americans, Kolbert for instance observes, “costs the airlines a quarter of a billion dollars’ worth of jet fuel annually,” and this isn’t to speak of the long-term health care costs that attach themselves to the public pocketbook in nearly unimaginable ways. (Kolbert notes that, for example, doors to public buildings are now built to be fifteen, instead of twelve, feet wide.)

“Fat Studies” researchers might claim, in other words, as Kolbert says, that by shattering so thoroughly our expectations of what a body ought to be, fat people (they insist on the term, it seems) can shift from being “revolting … agents of abhorrence and disgust” to “‘revolting’ in a different way … in terms of overthrowing authority, rebelling, protesting, and rejecting.” They might insist that “corpulence carries a whole new weight [sic] as a subversive cultural practice.” In “contrast to the field’s claims about itself,” says Kolbert, however, “fat studies ends up taking some remarkably conservative positions,” in part because it “effectively allies itself with McDonald’s and the rest of the processed-food industry, while opposing the sorts of groups that advocate better school-lunch programs and more public parks.” In taking such an extreme position, in short, “Fat Studies” ends up only strengthening the most reactionary policy tendencies.

As, logically speaking, it must. “To claim that some people are just meant to be fat is not quite the same as arguing that some people are just meant to be poor,” Kolbert observes, “but it comes uncomfortably close.” Similarly, to argue that our image of a successfully athletic body is tyrannical can, if not done carefully, be little different from the fanatical coach’s insistence that determination is the only thing separating his charges from championships. Maybe it’s true that success in golf, and other sports, is largely a matter of “will”—but if it is, wouldn’t it be better to be able to prove it? And if it isn’t, knowing that would certainly enable a more rational distribution of effort all the way around: from the players themselves (who might thereby seek another sport at an earlier age) to recruiters, from national sporting agencies to American universities, all of whom would then know what they sought. Maybe, in other words, measuring golfers by height isn’t so ridiculous after all.