Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but, also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet, why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that World War II commander Dwight Eisenhower selected Wilson to be his Secretary of Defense when the leader of the Allied war in western Europe was elected president in 1952, which led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse for General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how sometimes people within an organization can forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from those of the larger society, a big clue might be in how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that figure would be worth nearly ten times as much today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” thereby raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may perhaps circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by Columbia University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-first Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.
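That “quite a lot” can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: the starting ratio of 20 is a hypothetical figure, not a number from the Bloomberg report.

```python
# A 1,000 percent *increase* means the new value equals the old value
# plus ten times the old value -- i.e., eleven times the original.
# The starting ratio of 20 is purely hypothetical, for illustration.

old_ratio = 20
increase_pct = 1000
new_ratio = old_ratio * (1 + increase_pct / 100)

print(new_ratio)              # 220.0
print(new_ratio / old_ratio)  # 11.0
```

The point is the one the counters grasp immediately and the storytellers may not: a 1,000 percent increase is not double, or even ten times, but eleven times the original figure.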

That law in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in lower productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
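Stiglitz’s division can itself be counted out. A minimal sketch, with one loud caveat: the two spending rates below are assumptions made for illustration—Stiglitz gives the $21.7 million and the 500 jobs at $43,400, but no specific spending fractions.

```python
# Illustrative sketch of Stiglitz's aggregate-demand point.
# The two spending rates are assumed for illustration only;
# Stiglitz supplies the income figures, not these fractions.

TOTAL = 21_700_000            # Romney's 2010 income, per Stiglitz

RICH_SPEND_RATE = 0.10        # assumed: a very high earner spends a small fraction
TYPICAL_SPEND_RATE = 0.95     # assumed: a $43,400 earner spends nearly all of it

# Case 1: the whole sum goes to one household.
spent_concentrated = TOTAL * RICH_SPEND_RATE          # about $2.17 million

# Case 2: the same sum divided into 500 jobs.
jobs = 500
salary = TOTAL / jobs                                 # 43,400.0
spent_divided = jobs * salary * TYPICAL_SPEND_RATE    # about $20.6 million

print(f"Concentrated: ${spent_concentrated:,.0f} spent")
print(f"Divided:      ${spent_divided:,.0f} spent")
```

Under any plausible pair of spending rates with the same ordering, the divided case generates far more spending—which is all the “ironclad” law requires.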

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked before he died, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Wolff and Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e. what didn’t happen. But even in 1998, there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty accused then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prohibit what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew until, after the 106th Congress (1999-2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total numbers of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making them virtually impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by more than 50 percent of the vote, while Mitt Romney won only eight districts by such a large percentage. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively losing votes in others.

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive when these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.
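The gap between seat share and vote share is a few lines of counting. This sketch uses the figures quoted above for the most recent election; the Democratic seat count of 194 is inferred as 435 minus the 241 Republican seats.

```python
# Seat share vs. vote share in the U.S. House, using the figures
# quoted in the text (241 Republican seats on 49.9% of the vote;
# the Democratic count of 194 is inferred as 435 - 241).

HOUSE_SEATS = 435

results = {
    "Republicans": {"seats": 241, "vote_share": 0.499},
    "Democrats":   {"seats": 194, "vote_share": 0.473},
}

for party, r in results.items():
    seat_share = r["seats"] / HOUSE_SEATS
    bonus = seat_share - r["vote_share"]  # positive = over-performing the vote
    print(f"{party}: {seat_share:.1%} of seats on "
          f"{r['vote_share']:.1%} of the vote ({bonus:+.1%})")
```

Run, the arithmetic shows a roughly five-and-a-half point Republican “seat bonus” against a Democratic deficit—the single-direction pattern the paragraph describes.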

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, it merely begs the question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position in other words implies that minority votes will remain less valuable than other votes because a vote in a “majority-minority” district will have less probability of electing a congressperson who is a member of a majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because they reflect their underlying proportions of people—but what matters is whether whoever’s elected can deliver policies that benefit them.

Right here, in other words, we get to the heart of the dispute between the deceased Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the numbers of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts after all are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. While winning individual races by huge margins might be satisfying to some, no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures vote,” Chief Justice of the Supreme Court Earl Warren wrote in 1964; in that case, Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. His opponents ought to reply by saying they’re interested in what people could be—and building a real road to get there.

This Doubtful Strife

Let me be umpire in this doubtful strife.
—Henry VI. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, reached another milestone just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable, in other words, whether the humanities’ opposition to the sciences hasn’t had pernicious rather than beneficial effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But any real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it so that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” it’s a point that cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been thought of as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” and the like have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. In whatever underlying substrate, it is simply the case that those at the top of a pyramid will spend less than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, by dividing the money more equally, more economic activity is generated—and hence the more equal society is also the more prosperous society.

Still, to understand Stiglitz’s point requires following a sequence of connected ideas—among them basic mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on a rather different cast than it is usually given. By training its students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st-century America. Even worse, they fail to give students a means of combating that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

A Part of the Main

… every man is a peece of the Continent, a part of the maine
—John Donne, Devotions Upon Emergent Occasions

The “natural selection pressures that drive evolution can flip-flop faster than previously thought,” reported the Kansas City Star, six years ago, on a study of Bahamian lizards. The details are, as always, not nearly as interesting as the newspaper writers make them appear: they involve changes of as little as two or three percent. But the scientists found them significant, and the larger point remains: Darwin “thought that evolution must occur slowly and gradually,” but observed nature doesn’t demonstrate that. Which is to say that change, when it comes, can come suddenly and unexpectedly—something that may hold equally well for sports, say, as for lizards. Like golf, perhaps.

If I were to tell you, for instance, that while seven percent of all white people earning less than $50,000 a year participated in a particular something in 2009, nineteen percent of all white people earning more than $125,000 a year did, one plausible suspect for the role of the particular something might be the Republican Party. After all, Mitt Romney’s strategy to win the presidency this November involved capturing 61 percent of the white vote, according to an unnamed source quoted in the National Journal this past August. But that guess would be wrong: the “particular something” is a round of golf.

Surely it takes no great seer to tell us that if one partner in this twosome is in trouble, the other ought to be looking for a lawyer. And golf’s numbers have been stagnant at best: back in 2008, the New York Times ran a story on the “disappearance of golfers.” One expert quoted in the story said that while the “man on the street will tell you that golf is booming because he sees Tiger Woods on TV … the reality is, while we haven’t exactly tanked, the numbers have been disappointing for some time.” Golfers are overwhelmingly whiter and wealthier than their fellow Americans, just as Republican voters are—which is to say that, like the Republican Party, golf needs to ask whether being whiter and wealthier (and, though I haven’t mentioned it, older) are necessary—or contingent—parts of its identity.

The answer to that question will likely determine the survival of each. “If demographics is destiny, the Republican party has a rendezvous with irrelevance” coming, as one journalist has put the point—and golf, one assumes, faces much the same issue. Still, it seems likely that golf has at least as good a chance of survival as the Republican Party, if not better: it was already long in existence when the party was born.

I’m actually being facetious there—obviously, anything so important as golf will outlive a mere political party, that transient accumulation of various interests. The question, then, isn’t so much the end as the means: the path whereby golf might succeed. And there, it may be, lies a tale.

The roots of that tale might lie with the work of a doctor named Ann McKee. She works at the Veterans Hospital in Bedford, Massachusetts, and over the past decade it has become part of her job to examine the brains of dead football players and other people who may have been exposed to repeated concussions over the course of their lives. She has become expert in diagnosing—after death, which is the only time it can be diagnosed—a condition known as chronic traumatic encephalopathy, or C.T.E. What she’s found, however, is that there are more dangerous things than concussions.

What Dr. McKee’s work has shown, that is, is that while concussions are horrible injuries, it’s really the repeated, low-level jarrings that an activity like football inflicts on the brain that seem to cause C.T.E., a disease that mimics Alzheimer’s in many ways, including a final descent into dementia. And what it’s meant, at least for the doctor, is that she’s found an answer to this question: if her son “had a chance to join the NFL,” Malcolm Gladwell of the New Yorker asked her, “what would she advise him?” And here is what the doctor said: “‘Don’t. Not if you want to have a life after football.’”

“And therefore never send to know for whom the bell tolls,” wrote John Donne four centuries ago: “It tolls for thee.” Dr. McKee’s reply to Gladwell’s question may be just such a church bell tolling in the night: at the least, it is the link between the NFL and those lizards sunning themselves in the Bahamas. For when the mothers of America begin to hear it, and what it might mean for their sons (and possibly their daughters), it may provoke something of a sea change in the behavior of Americans. Like the change in the lizards, it may come suddenly, not gradually. One day, there just won’t be anybody at the stadium any more.

If that does happen, it seems absurd to think that Americans will abandon sport entirely. Baseball, one expects, would see a surge in popularity that would surpass even the wave of the steroid era. Basketball, obviously, would become even more popular than it already is. And, perhaps, just a bit of interest would spill over golf’s way. Golf, in other words, unlike the Republican Party, may be on the cusp of a new boom. What seems improbable, in short, can quickly come to seem inevitable.

And so, just as entire societies can, at times, be swept by vast tides that completely overcome what came before, so too can obscure blog posts in the wilderness called the Internet be swung suddenly away from their ostensible subjects. Which might be of some comfort to those who observe completely evitable tragedies like the one last week in Connecticut, and wonder if, or when, the United States will decide to do something about its ridiculous gun laws.