To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware that the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht, Oliver Cromwell had said two centuries before, was one of the two places Irish Catholics could go if they did not wish to be murdered by his New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seeds to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, then the only way racism could remain at precisely the same level as in the past is if the American public is not getting less racist—merely better at hiding it. That then raises the question: if the level of racism still remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation on the grounds that Mississippi’s official delegation, an all-white slate of delegates, had only become the “official” delegation by suppressing the votes of the state’s 400,000 black people—which had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in making, because as the decidedly non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president who made economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all, put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including an inability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump—who for the most part was the only candidate other than Bernie Sanders to address these issues—began to respond to during the primary season over the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz for instance sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That reformation has troubling potential consequences, if they have not already become real. One of them has been adequately described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should surely be self-evident (but apparently isn’t to many people), a lot more will happen economically if 500 people split twenty million dollars than if one person keeps all of it.
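To make Stiglitz’s aggregate-demand point concrete, here is a minimal sketch in Python. The spending rates are purely illustrative assumptions (they are not Stiglitz’s figures); the only point is that the same $21.7 million produces very different amounts of spending depending on how it is divided.

```python
# Toy illustration of the aggregate-demand argument above.
# The spending rates below are hypothetical assumptions, chosen for illustration only.
income = 21_700_000            # Romney's 2010 income, as quoted above

rich_spend_rate = 0.10         # assumed: a very rich household spends a small fraction
middle_spend_rate = 0.95       # assumed: a middle-class household spends nearly all

one_household = income * rich_spend_rate
five_hundred_households = 500 * (income / 500) * middle_spend_rate   # $43,400 apiece

print(f"Spending if one household holds the $21.7M: ${one_household:,.0f}")
print(f"Spending if 500 households split it:        ${five_hundred_households:,.0f}")
```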

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’) argument in favor of a broadly-based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus, in other words, and implicitly allowed by the Democratic Party’s leadership, enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism”—the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” Macedo asks, in other words: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in short, have begun to wonder publicly about why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe for instance in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to give the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well-known as the most formidable intellectual defender of slavery in American history. His most cunning arguments after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun the two were much the same argument: to argue against majority rule was to argue for slavery. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) And that is to say that Stephen Macedo and Fannie Lou Hamer are choosing a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that those arguments are having very real effects.

What’s really significant, in short, about Macedo’s “insights” about majoritarianism is that, as a possessor of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what Lenin called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to think that their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although they might not express themselves in terms that those with elite educations find palatable—in a fashion that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while the organs of elite opinion like The New York Times or other outlets might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.


The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago mayor Rahm Emanuel fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; for example, according to former mayoral candidate Chuy Garcia, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election he (Garcia) might have won. Yet, as long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the very belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions, while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so of course is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes towards change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe Francois-Marie Arouet, who chose the pen-name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death will register as anything more than another nearly-anonymous casualty.

Leibniz, the older of the two, is best known for inventing (at the same time as the Englishman Isaac Newton) calculus: a mathematical tool immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes something to Leibniz’s innovation—and one that is, as Wikipedia puts it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts also, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take—by lumping together—what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
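For readers who want to see Kepler’s trick in action, here is a minimal Python sketch: the area of a regular polygon inscribed in a circle approaches πr² as its sides grow shorter and more numerous. (The formula below is standard geometry offered as an illustration, not anything drawn from Leibniz’s or Kepler’s own texts.)

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n identical isosceles triangles, each with apex angle 2*pi/n."""
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

# As n grows, the polygon's area converges on the circle's area, pi * r^2.
for n in (6, 24, 96, 10_000):
    print(f"{n:>6} sides: {inscribed_polygon_area(n):.6f}")
print(f"circle (pi) : {math.pi:.6f}")
```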

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (“nature does not make leaps”), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself was smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and, in doing so, refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened five-meter-wide fissures in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s sweeping survey of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in Annals of the Former World: “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of people like Bretz and Alvarez (of whom more below), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas are in favor of sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As the former editor of The Baffler, Thomas Frank, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, with or without realizing it. The humanities for example are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But, why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet, in another—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather due to the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago could be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” As a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, has found, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing up government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point is, one might observe, the same as Leibniz’s: no matter how powerless an infinitesimal sheep might be, gathered together they can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.

Several And A Single Place

 

What’s the matter,
That in these several places of the city
You cry against the noble senate?
Coriolanus 

 

The explanation, says labor lawyer Thomas Geoghegan, possesses amazing properties: he can, the one-time congressional candidate says, “use it to explain everything … because it seems to work on any issue.” But before trotting out what that explanation is, let me select an issue that might appear difficult to explain: gun control, and more specifically just why, as Christopher Ingraham of the Washington Post wrote in July, “it’s never the right time to discuss gun control.” “In recent years,” as Ingraham says, “politicians and commentators from across the political spectrum have responded to mass shootings with an invocation of the phrase ‘now is not the time,’ or a close variant.” That inability even to discuss gun control is tremendously depressing, at least insofar as you regard gun deaths as a needless waste of lives—until you realize that we Americans have been here before. And that demonstrates, just maybe, that Thomas Geoghegan has a point.

Over a century and a half ago, Americans were facing another issue that, in the words of one commentator, “must not be discussed at all.” It was so grave an issue, in fact, that very many Americans found “fault with those who denounce it”—a position that this commentator found odd: “You say that you think [it] is wrong,” he observed, “but you denounce all attempts to restrain it.” That’s a pretty strange position, because who thinks something is wrong, yet is “not willing to deal with [it] as a wrong?” What other subject could be called a wrong, but should not be called “wrong in politics because that is bringing morality into politics,” and conversely should not be called “wrong in the pulpit because that is bringing politics into religion”? To sum up, this commentator said, “there is no single place, according to you, where this wrong thing can properly be called wrong!”

The place where this was said was New Haven, Connecticut; the time, March of 1860; the speaker, a failed senatorial candidate now running for president for a brand-new political party. His name was Abraham Lincoln.

He was talking about slavery.

*                                            *                                        *

To many historians these days, much about American history can be explained by the fact that, as historian Leonard Richards of the University of Massachusetts put it in his 2000 book, The Slave Power: The Free North and Southern Domination, 1780-1860, so “long as there was an equal number of slave and free states”—which was more or less official American policy until the Civil War—“the South needed just one Northern vote to be an effective majority in the Senate.” That meant that controlling “the Senate, therefore, was child’s play for southern leaders,” and so “time and again a bill threatening the South [i.e., slavery above all else] made its way through the House only to be blocked in the Senate.” It’s a stunningly obvious point, at least in retrospect—at least for this reader—but I’d wager that few, if any, Americans have really thought through the consequences of this fact.

Geoghegan for example has noted that—as he put it in 1998’s The Secret Lives of Citizens: Pursuing the Promise of American Life—even today the Senate makes it exceedingly difficult to pass legislation: as he wrote, at present only “two-fifths of the Senate, or forty-one senators, can block any bill.” That is, it takes at least sixty senatorial votes to break the procedural threat known as the “filibuster”: a minority of forty-one senators can keep any bill from ever coming to a vote. The filibuster, however, is not the only anti-majoritarian feature of the Senate, which is also equipped with such quaint customs as the “secret hold” and the quorum call and so forth, each of which can be used to delay a bill’s hearing—and so buy time to squelch potential legislation. Yet these radically disproportionate senatorial powers merely mask the more basic inequality of representation at the heart of the Senate as an institution.

As political scientists Frances Lee and Bruce Oppenheimer point out in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, the Senate is, because it makes small states the equal of large ones, “the most malapportioned legislature in the democratic world.” As Geoghegan has put the point, “the Senate depart[s] too much from one person, one vote,” because (as of the late 1990s) “90 percent of the population base as represented in the Senate could vote yes, and the bill would still lose.” Although Geoghegan wrote that nearly two decades ago, that is still largely true today: in 2013, Dylan Matthews of The Washington Post observed that while the “smallest 20 states amount to 11.27 percent of the U.S. population,” their senators “can successfully filibuster [i.e., block] legislation.” Thus, although the Senate is merely one antidemocratic feature of the U.S. Constitution, it’s an especially egregious one that, by itself, largely prevented a serious discussion of slavery in the years before the Civil War—and today prevents the serious discussion of gun control.
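To see how a figure like Matthews’s is computed, here is a minimal Python sketch. The state populations in it are made-up, purely illustrative numbers (reproducing the actual 11.27 percent would require real Census data); the function simply asks what the smallest share of the national population is whose senators could sustain a filibuster.

```python
# Sketch only: illustrative, invented state populations, not Census data.
def min_blocking_population_share(state_populations, senators_needed=41):
    """Smallest share of the total population whose senators (two per state)
    are enough to sustain a filibuster, built from the least populous states up."""
    total = sum(state_populations)
    senators, population = 0, 0
    for pop in sorted(state_populations):     # start with the smallest states
        if senators >= senators_needed:
            break
        senators += 2
        population += pop
    return population / total

# Fifty hypothetical state populations, in millions (illustrative values only).
toy_states = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7,
              7, 8, 8, 8, 9, 9, 9, 10, 10, 10, 11, 11, 12, 12, 13, 13, 14,
              15, 16, 17, 18, 19, 20, 22, 25, 28, 30, 35, 38, 40]

share = min_blocking_population_share(toy_states)
print(f"In this toy union, senators for {share:.1%} of the population can block a bill.")
```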

The headline of John Bresnahan’s 2013 article in Politico about the response to the Sandy Hook massacre, for example, was “Gun control hits brick wall in Senate.” Bresnahan quoted Nevadan Harry Reid, the Senate Majority Leader at the time, as saying that “the overwhelming number of Senate Republicans—and that is a gross understatement—are ignoring the voices of 90 percent of the American people.” The final vote was 54-46: in other words, a majority of the Senate was in favor of controls, but because the pro-control senators did not have a supermajority, the measure failed. In short, the vote was a near-perfect illustration of how the Senate can kill a measure that 90 percent of Americans favor.

And you know? Whatever you think about gun control, as an issue, if 90 percent of Americans want something, and what prevents them is not just a silly rule—but the same rule that protected slavery—well then, as Abraham Lincoln might tell us, that’s a problem.

It’s a problem because far from the Senate being—as George Washington supposedly said to Thomas Jefferson—the saucer that cools off politics, it’s actually a pressure cooker that exacerbates issues, rather than working them out. Imagine, say, that the South had not had the Senate to protect its “peculiar institution” in the years leading to the Civil War: immigration to the North would gradually have turned the tide in Congress, which might have led to a series of small pieces of legislation that, eventually, would have abolished slavery.

Perhaps that may not have been a good thing: Ta-Nehisi Coates, of The Atlantic, has written that every time he thinks of the 600,000-plus deaths that occurred as a result of the Civil War, he feels “positively fucking giddy.” That may sound horrible to some, of course, but there is something to the notion of “redemptive violence” when it comes to that war; Coates for instance cites the contemporary remarks of Private Thomas Strother, United States Colored Troops, in the Christian Recorder, the 19th century paper of the African Methodist Episcopal Church:

To suppose that slavery, the accursed thing, could be abolished peacefully and laid aside innocently, after having plundered cradles, separated husbands and wives, parents and children; and after having starved to death, worked to death, whipped to death, run to death, burned to death, lied to death, kicked and cuffed to death, and grieved to death; and worst of all, after having made prostitutes of a majority of the best women of a whole nation of people … would be the greatest ignorance under the sun.

“Were I not the descendant of slaves, if I did not owe the invention of my modern self to a bloody war,” Coates continues, “perhaps I’d write differently.” Maybe in some cosmic sense Coates is wrong, and violence is always wrong—but I don’t think I’m in a position to judge, particularly since I, as in part the descendant of Irish men and women in America, am aware that the Irish themselves may have codified that sort of “blood sacrifice theory” in the General Post Office of Dublin during Easter Week of 1916.

Whatever you think of that, there is certainly something to the idea that, because slaves were the single biggest asset in the entire United States in 1860, there was little chance the South would have agreed to end slavery without a fight. As historian Steven Deyle has noted in his Carry Me Back: The Domestic Slave Trade in American Life, the value of American slaves in 1860 was “equal to about seven times the total value of all currency in circulation in the country, three times the value of the entire livestock population, twelve times the value of the entire U.S. cotton crop and forty-eight times the total expenditure of the federal government”—certainly far more value than it takes to start a war. But then had slavery not had, in effect, government protection during those antebellum years, it’s questionable whether slaves ever might have become such valuable commodities in the first place.

Far from “cooling” things off, in other words, it’s entirely likely that the U.S. Senate, and other anti-majoritarian features of the U.S. Constitution, actually act to inflame controversy. By ensuring that one side does not need to come to the bargaining table, in fact, all such oddities merely postpone—they do not prevent—the day of reckoning. They build up fuel, ensuring that when the day finally arrives, it is all the more terrible. Or, to put it in the words of an old American song: these American constitutional idiosyncrasies merely trample “out the vintage where the grapes of wrath are stored.”

That truth, it seems, marches on.

Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus


Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the board of trustees of the University of Wisconsin issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin’s academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would more or less allow Wisconsin’s Board of Regents to fire anyone virtually at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own clichés of corporate speak here. The seriousness with which Walker takes the university’s mission as a searcher for truth might perhaps be discerned by the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American essentially exists right now, today, on the back of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when a car driving the wrong way gets hit.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, where Maria Konnikova described a talk moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the field was populated almost entirely by liberals and outright radicals, with very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: actually, these psychologists were only markedly left-of-center compared to the general population when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs, these professors tilted left at about a sixty to seventy percent clip, when it came to what sometimes are called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be so that—like some predator/prey model from ecological study—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or any of the other surely fascinating topics the American academy is currently investigating. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season, and of those, 112 and counting have been played by the Chicago Cubs—because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That’s something that no major league baseball team has ever done—and the reason Joe Maddon has for tossing aside baseball orthodoxy like so many spit cups of tobacco juice is why, eggheads and corporate lackeys aside, Joe Maddon is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity in the rules of baseball, something that most fans are so inured to that they have become unconscious of its meaning. That peculiarity is this: baseball has history. It’s a phrase that might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead jumps to the next player after the last batter of the previous inning. This is important because pitchers, who are usually by a wide margin the weakest batters on any team, traditionally bat in the ninth spot: by batting them last, a manager usually ensures that they do not come to the plate until the second, or even third, inning at the earliest. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the numbers of plate appearances on the part of the pitcher, that does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, that strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start at the beginning with every new inning, batting the weakest hitter last means that a lineup’s best players—usually the one through three spots—do not have as many runners on base as they might otherwise.

Now, the value of this move of putting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusionary advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet, in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and simultaneously the best teams in the National League (the American League, which does not allow pitchers to bat, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and, conversely, prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the differen[ce] between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.
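Because the edge is so small, the natural way to look for it is simulation. The Python sketch below is only a toy model of the idea, not Maddon’s or Lindbergh’s actual math: every plate appearance is either a single or an out, the on-base probabilities are invented for illustration, and the only variable being tested is where in the order the weak-hitting pitcher bats.

```python
import random

def simulate_game(on_base, rng):
    """One 9-inning game under a deliberately crude model: every non-out is a
    single that moves each runner up exactly one base (a runner on third scores),
    and the batting order carries over from inning to inning, as in real baseball."""
    batter, runs = 0, 0
    for _ in range(9):
        outs, bases = 0, [False, False, False]        # first, second, third
        while outs < 3:
            reaches = rng.random() < on_base[batter % 9]
            batter += 1
            if reaches:
                if bases[2]:
                    runs += 1                          # runner on third scores
                bases = [True, bases[0], bases[1]]     # everyone else moves up
            else:
                outs += 1
    return runs

def average_runs(lineup, games=50_000, seed=1):
    rng = random.Random(seed)
    return sum(simulate_game(lineup, rng) for _ in range(games)) / games

# Hypothetical on-base probabilities: eight position players and a weak-hitting
# pitcher (0.15). These are illustrative assumptions, not real Cubs numbers.
hitters, pitcher = [0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29], 0.15
pitcher_ninth  = hitters + [pitcher]
pitcher_eighth = hitters[:7] + [pitcher] + hitters[7:]

print("pitcher batting 9th:", average_runs(pitcher_ninth))
print("pitcher batting 8th:", average_runs(pitcher_eighth))
```

Whichever way the gap runs in a crude model like this, it comes out tiny, which is consistent with Lindbergh’s point that the tactic offers at best an infinitesimal edge; what makes the question nontrivial at all is precisely the lineup’s carryover from inning to inning.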

Yet, why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes in order to make that happen. The reason that he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so singlemindedly devoted to getting the maximum value out of its central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The number of voters represented by a single elector in the Electoral College, for example, can differ by hundreds of thousands from state to state; while through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or to put it more directly, as Dylan Matthews put it for the Washington Post two years ago, if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to seek all of their potential value.

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken on here: one reason, perhaps, explaining the split in psychology professors between their opinions on economic matters and “cultural” ones observed by Maria Konnikova. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances are inevitably corrected, which undoubtedly will occur in this matter of the relative value of an American vote. The problem of course is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. Perhaps there is one possible upside however: when that happens—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler in the frontlines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but instead the slow and patient work of tax accountants and auditors.

You know, the people who counted.

The Smell of Victory

To see what is in front of one’s nose needs a constant struggle.
George Orwell. “In Front of Your Nose”
    Tribune, 22 March 1946

 

Who says country clubs are irony-free? When I walked into Medinah Country Club’s caddie shack on the first day of the big member-guest tournament, the Medinah Classic, Caddyshack, that vicious class-based satire of country-club stupidity, was on the television. These days, far from being patterned after Caddyshack’s Judge Smails (a pompous blowhard), most country club members are capable of reciting the lines of the movie nearly verbatim. Not only that—they’ve internalized the central message of the film, the one indicated by the “snobs against the slobs” tagline on the movie poster: the moral that, as another 1970s cinematic feat put it, the way to proceed through life is to “trust your feelings.” Like a lot of films of the 1970s—Animal House, written by the same team, is another example—Caddyshack’s basic idea is don’t trust rationality: i.e., “the Man.” Yet, as the phenomenon of country club members who’ve memorized Caddyshack demonstrates, that signification has now become so utterly conventional that even the Man doesn’t trust the Man’s methods—which is how, just like O.J. Simpson’s jury, the contestants in this year’s Medinah Classic were prepared to ignore probabilistic evidence that somebody was getting away with murder.

That’s a pretty abrupt jump-cut in style, to be sure, particularly in regard to a subject as sensitive as spousal abuse and murder. Yet to get caught up in the (admittedly horrific) details of the Simpson case is to miss the forest for the trees—at least according to a short 2010 piece in the New York Times entitled “Chances Are,” by Steven Strogatz, the Schurman Professor of Applied Mathematics at Cornell University.

The professor begins by observing that the prosecution spent the first ten days of the six-month trial establishing that O.J. Simpson abused his wife, Nicole. From there, as Strogatz says, prosecutors Marcia Clark and Christopher Darden introduced statistical evidence showing that abused women who are murdered are usually killed by their abusers. Thus, as Strogatz puts it, the “prosecution’s argument was that a pattern of spousal abuse reflected a motive to kill.” Unfortunately, however, the prosecution did not highlight a crucial point about its case: Nicole Brown Simpson was dead.

That, you might think, ought to be obvious in a murder trial; but because the prosecution did not underline the fact that Nicole was dead, the defense, led on this issue by famed trial lawyer Alan Dershowitz, could (and did) argue that “even if the allegations of domestic violence were true, they were irrelevant.” As Dershowitz would later write, the defense claimed that “‘an infinitesimal percentage—certainly fewer than 1 of 2,500—of men who slap or beat their domestic partners go on to murder them.’” Ergo, even if battered women do tend to be murdered by their batterers, that didn’t mean that this battered woman (Nicole Brown Simpson) was murdered by her batterer, O.J. Simpson.

In a narrow sense, of course, Dershowitz’s claim is true: most abused women, like most women generally, are not murdered. So it is absolutely true that very, very few abusers are also murderers. But as Strogatz says, the defense’s argument was a very slippery one.

In other words, it’s true that, as Strogatz says, “both sides were asking the jury to consider the probability that a man murdered his ex-wife, given that he previously battered her.” But to a mathematician like Strogatz, or to his statistician colleague I.J. Good—who first tackled this point publicly—that is the wrong question to ask.

“The real question,” Strogatz writes, is: “What’s the probability that a man murdered his ex-wife, given that he previously battered her and she was murdered?” That is the question that applied in the Simpson case, because Nicole Simpson had been murdered. Had the prosecution asked that question—the real one, not the poorly asked or outright fraudulent questions put by both sides at Simpson’s trial—the answer would have turned out to be about 90 percent.

To run through Strogatz’s math quickly (while still capturing the basic points): of a sample of 100,000 battered American women, we could expect about 5 to be murdered by random strangers in any given year, and about 40 to be murdered by their batterers. So of the 45 battered women murdered each year per 100,000 battered women, about 90 percent are murdered by their batterers.
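Strogatz’s percentage falls right out of those two figures; here is the arithmetic as a few lines of Python, using only the numbers quoted above:

    # Figures as given by Strogatz: per 100,000 battered women in a given year,
    # about 5 are murdered by random strangers and about 40 by their batterers.
    murdered_by_strangers = 5
    murdered_by_batterer = 40
    total_murdered = murdered_by_strangers + murdered_by_batterer   # 45

    # The "real question": P(murdered by batterer | battered AND murdered)
    answer = murdered_by_batterer / total_murdered
    print(f"{answer:.0%}")   # prints 89%, i.e. roughly 90 percent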

In a very real sense, then, the prosecution lost its case against O.J. because it did not present its probabilistic evidence correctly. Interviewed years later for the PBS program Frontline, Robert Ball, a lawyer for Brenda Moran, one of the jurors on the Simpson case, said that according to his client the jury thought that for the prosecution “to place so much stock in the notion that because [O.J.] engaged in domestic violence that he must have killed her, created such a chasm in the logic [that] it cast doubt on the credibility of their case.” Or, as one of the prosecutors, William Hodgman, said after the trial, the jury “didn’t understand why the prosecution spent all that time proving up the history of domestic violence,” because they “felt it had nothing to do with the murder case.” In that sense, Hodgman admitted, the prosecution lost because it failed to close the loop in the jury’s understanding—it never made the point that Strogatz, and Good before him, say is crucial to understanding the probabilities here: the fact that Nicole Brown Simpson had been murdered.

I don’t know, of course, what role distrust of scientific or rational thought played in the jury’s ultimate decision—certainly, as has come to light in recent years, crime laboratories have often been accused of “massaging” the evidence, particularly when it comes to African-American defendants. As Spencer Hsu reported in the Washington Post, just this April the “Justice Department and FBI … formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence.” Yet, while it’s obviously true that bad scientific thought—i.e., “thought” that isn’t scientific at all—ought to be quashed, it’s also, I think, true that there is a pattern of distrust of that kind of thinking that is not limited to jurors in Los Angeles County, as I discovered this weekend at the Medinah Classic.

The Classic is a member-guest tournament; member-guests are golf tournaments of two-man teams made up of a country club member and his guest. They are held by country clubs around the world and played according to differing formats, but they usually depend upon each golfer’s handicap index: the number assigned by the United States Golf Association after the golfer pays a fee and enters his scores into the USGA’s computer system. (It’s similar to the way that carrying weights allows horses of different sizes to race each other, or the way weight classes make boxing or wrestling matches fair.) Medinah’s member-guest is, nationally, one of the biggest because of the number of participants: around 300 golfers every year, divided into three flights according to handicap index (i.e., ability). Since Medinah has three golf courses, it can easily accommodate so many players—but what it can’t do is adequately police the tournament’s entrants, as the golfers I caddied for discovered.

Our tournament began with the member shooting an amazing 30, after handicap adjustment, on the front nine of Medinah’s Course Three, the site of three U.S. Opens, two PGA Championships, numerous Western Opens (back when they were called Western Opens) and a Ryder Cup. A score of 30 for nine holes, on any golf course, is pretty strong—all the more so on a brute like that course, and more so again in the worst of the Classic’s three flights. I thought so, and said so to the golfers I was caddying for after our opening round. They were kind of down about the day’s ending—especially the guest, who had scored an eight on our last hole of the day. Despite that, I told my guys that on the strength of the member’s opening 30, if we weren’t outright winning the thing we were top three. As it turned out, I was correct—but despite the amazing showing we had on the tournament’s first day, we would soon discover that there was no way we could catch the leading team.

In a handicapped tournament like the Classic, what matters isn’t so much what any golfer scores, but what he scores in relation to his handicap index. Thus, the member half of our member-guest team hadn’t actually shot a 30 on the front side of Medinah’s Course Three—which would surely have been a record for an amateur tournament, and I think a record for any tournament at Medinah ever—but had shot a 30 after counting the shots his handicap allowed. His score, to use the parlance, wasn’t gross but net: my golfer had shot an effective six under par according to the tournament rules.
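To make “net” concrete: the computation is just gross strokes minus handicap strokes. The exact numbers below are my assumptions (the text gives only a gross score “in the mid 40s,” a net 30, and a result of six under par), but they show the shape of the calculation:

    # Assumed figures, for illustration only.
    par_front_nine = 36        # a typical nine-hole par at a par-72 course
    gross_score = 44           # "mid 40s" gross, assumed
    strokes_received = 14      # handicap strokes allotted on that nine, assumed

    net_score = gross_score - strokes_received
    print(net_score)                    # 30
    print(net_score - par_front_nine)   # -6, i.e. a net six under par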

Naturally, such an amazing score might raise questions, particularly when it’s shot in the flight reserved for the worst players. Yet my player had a ready explanation for why he could shoot a low gross number (in the mid 40s) and still hold a legitimate handicap index: he has a literal handicap—a congenital deformity in one of his ankles. The deformity is not enough to prevent him from playing, but as he plays—and his pain medications wear off—he tires, which is to say that he can very often shoot respectable scores on the first nine holes and horrific scores on the second nine. His actual handicap, in other words, causes his golf handicap index to be slightly askew from reality.

Thus, he is like the legendary Sir Gawain, whose strength, according to Arthurian legend, tripled at noon but faded as the sun set—a situation the handicap system is ill-designed to handle. Handicap indexes presume roughly the same ability at the beginning of a round as at the end, so in this member’s case the index understates his ability at the start of his round while wildly overstating it at the finish. In a sense, then, it could be objected that he benefits from the handicap system unfairly—unless you happen to consider that the man walks in nearly constant pain every day of his life. If that’s “gaming the system,” it’s a hell of a way to do it: acquiring a literal handicap to pad your golf handicap would obviously be absurd.

Still, the very question points to the great danger of handicapping systems, which is why people have gone to the trouble of looking for ways to tell whether someone is taking advantage of the system—without resorting to telepathy or some other kind of magic to divine the golfer’s real intent. The most important of these investigators is Dean L. Knuth, the former Senior Director of Handicapping for the United States Golf Association, a man whose nickname is the “Pope of Slope.” In that capacity Knuth developed the modern handicapping system—and a way to calculate the odds of a player with a given handicap shooting a particular score.

In this case, my information is that the team that ended up winning our flight—and had won the first round—had a guest player who represented himself as possessing a handicap index of 23 when the tournament began. For those who aren’t aware, a 23 is a player who does not expect to break ninety during a round of golf, when par for most courses is 72. (In other words, a 23 isn’t a very good player.) Yet this same golfer shot a gross 79 during his second round, for what would have been a net 56: a ridiculous number.
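The arithmetic that makes a net 56 so ridiculous is the same gross-minus-handicap calculation as before, now run with the figures reported above. (Knuth’s actual tables translate such a gap into odds; this sketch only shows how large the gap is.)

    # Figures from the text: a self-declared 23 handicap shooting a gross 79.
    gross = 79
    claimed_handicap = 23
    par = 72

    net = gross - claimed_handicap              # 56
    expected_score = par + claimed_handicap     # roughly 95 for a 23 handicap
    print(net, expected_score - gross)          # 56, and 16 shots better than his index predicts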

Knuth’s calculations reflect that: they put the odds of someone shooting a score so far below his handicap on the order of several tens of thousands to one, especially under tournament conditions. In other words, while my player’s handicap wasn’t a straightforward depiction of his real ability, it did adequately capture his total worth as a golfer. This other player’s handicap, though, appeared to many, including one of the assistant professionals who went out to watch him play, to be highly suspect.

That assistant professional, a five handicap himself, said that after watching this guest play he would hesitate to play him straight up, much less give the fellow ten or more shots: the man not only hit his shots crisply, but also pulled off shots that even professionals fear, like getting a ball to stop on a downslope. So for the gentleman to claim to be a 23 handicap seemed, to this assistant professional, incredibly, monumentally improbable. Observation, then, seems to confirm what Dean Knuth’s probability tables suggest: the man was playing with an improper handicap.

What happened as the tournament went along also suggests that at least Medinah’s head professional knew the man’s reported handicap index wasn’t legitimate: after the first round, in which that player shot a score as suspect as his second-round 79 (I couldn’t discover precisely what it was), his handicap was adjusted downwards, and after the second-round 79 more shots were knocked off his initial index. Yet although there was a lot of complaining on the part of fellow competitors, no one was willing to take any serious action.

Presumably, this inaction rested on a theory similar to the legal system’s presumption of innocence: maybe the man really had “found his swing,” or “practiced really hard,” or gotten a particularly good lesson just before arriving at Medinah’s gates. But to my mind, such a presumption ignores, as the O.J. jury did, the salient issue: in the Simpson case, that Nicole was dead; in the Classic, that this team was leading the tournament. That was the crucial piece of data: it wasn’t that this team could be leading the tournament, it was that they were leading it—just as, while you couldn’t use statistics to predict whether O.J. Simpson would murder his ex-wife Nicole, you certainly could use statistics to say that O.J. probably murdered Nicole once Nicole was murdered.

The fact, in other words, that this team of golfers was winning the tournament was itself evidence they were cheating—why would anyone cheat if they weren’t going to win as a result? That doesn’t mean, to be sure, that winning constitutes conclusive evidence of fraud—just as the probabilistic evidence doesn’t mean that O.J. must have killed Nicole—but it does indicate the need for further investigation, and it suggests what presumption that investigation ought to pursue. Particularly given the size of the lead: by the end of the second day, that team was more than twenty shots ahead of the next competitors.

Somehow, however, it seems that Americans have lost the ability to see the obvious. Perhaps that’s through the influence of films from the 1970s like Caddyshack or Star Wars: both, interestingly, feature scenes in which one of the good guys puts on a blindfold in order to “get in touch” with some cosmic quality that lies far outside the visible spectrum. (The original Caddyshack script actually cites the Star Wars scene.) But it is not necessary to blame the films themselves: as Thomas Frank says in his book The Conquest of Cool, one of America’s outstanding myths represents the world as a conflict between all that is “tepid, mechanical, and uniform” and the possibility of a “joyous and even a glorious cultural flowering.” In the story told by cultural products like Caddyshack, it’s by casting aside rational methods—like Luke Skywalker switching off his targeting computer in the trench of the Death Star—that we are all going to be saved. (Or, as Rodney Dangerfield’s character puts it at the end of Caddyshack, “We’re all going to get laid!”) That, I suppose, might be true—but perhaps not for the reasons advertised.

After all, once we’ve put on the blindfold, how can we be expected to see?