Forked

Alice came to a fork in the road. “Which road do I take,” she asked.
“Where do you want to go?” responded the Cheshire Cat.
“I don’t know,” Alice answered.
“Then,” said the Cat, “it doesn’t matter.”
—Lewis Carroll. Alice’s Adventures in Wonderland. (1865).

 

At Baden-Baden in 1925, Reti, the hypermodern challenger, opened with the Hungarian, or King’s Fianchetto; Alekhine—the only man to die still holding the title of world champion—countered with an unassuming king’s pawn to e5. The key moment did not arrive, however, until Alekhine threw his rook nearly across the board at move 26, a move that appeared to lose the future champion a tempo—but as C.J.S. Purdy would write for Chess World two decades, a global depression, and a world war later, “many of Alekhine’s moves depend on some surprise that comes far too many moves ahead for an ordinary mortal to have the slightest chance of foreseeing it.” The rook move, in the end, made possible the triumphant slash of Alekhine’s bishop at move 42—a move that “forked” the only two capital pieces Reti had left: his knight and rook. “Alekhine’s chess,” Purdy would write later, “is like a god’s”—a hyperbole that not only leaves this reader of the political scientist William Riker thankful that the chess writer did not see the game Riker saw played at Freeport, 1858, but also grateful that neither man saw the game played at Moscow, 2016.

All these games, in other words, ended with what is known as a “fork,” or “a direct and simultaneous attack on two or more pieces by one piece,” as the Oxford Companion to Chess defines the maneuver. A fork thus forces the opponent to choose; in Alekhine’s triumph, called “the gem of gems” by Chess World, the Russian grandmaster forced his opponent to choose which piece to lose. Just so, in The Art of Political Manipulation, from 1986, University of Rochester political scientist William Riker observed that “forks” are not limited to dinner or to chess. In Political Manipulation Riker introduced the term “heresthetics,” or—as Norman Schofield defined it in 2006—“the art of constructing choice situations so as to be able to manipulate outcomes.” Riker further said that “the fundamental heresthetical device is to divide the majority with a new alternative”—or, in other words, heresthetics is often a kind of political fork.

The premier example Riker used to illustrate such a political forking maneuver was performed, the political scientist wrote, by “the greatest of American politicians,” Abraham Lincoln, at the sleepy Illinois town of Freeport during the drowsy summer of 1858. Lincoln that year was running for the U.S. Senate seat for Illinois against Stephen Douglas—the man known as “the Little Giant” both for his less-than-imposing frame and his significance in national politics. So important had Douglas become by that year—by extending federal aid to the first “land grant” railroad, the Illinois Central, and successfully passing the Compromise of 1850, among many other achievements—that it was an open secret that he would run for president in 1860. And not merely run; the smart money said he would win.

The smart money, however, was not on Abraham Lincoln, a lanky and little-known one-term congressman in 1858. The odds against him were so long, in fact, that according to Riker, Lincoln had to take a big risk to win—which he did, by posing a question to Douglas at the little town of Freeport, near the Wisconsin border, towards the end of August. That question was this: “Can the people of a United States Territory, in any lawful way, against the wish of any citizen of the United States, exclude slavery from its limits prior to the formation of a state constitution?” It was a question, Riker wrote, that Lincoln had honed “stiletto-sharp.” It proved a knife in the heart of Stephen Douglas’ ambitions.

Lincoln was, of course, explicitly against slavery, and therefore thought that territories could ban slavery prior to statehood. But many others thought differently; in 1858 the United States stood poised at a precipice that, even then, only a few—Lincoln among them—could see. Already, the nation had been roiled by the Kansas-Nebraska Act of 1854; already, a state of war existed between pro- and anti-slavery men on the frontier. The year before, the U.S. Supreme Court had outlawed the prohibition of slavery in the territories by means of the Dred Scott decision—a decision that, Lincoln had already charged in his “House Divided” speech that June, Douglas had conspired to bring about with the president of the United States, James Buchanan, and Supreme Court Chief Justice Roger Taney. What Lincoln’s question was meant to do, Riker argued, was to “fork” Douglas between two constituencies: the local Illinois constituents who could, if they chose, return Douglas to the Senate in 1858—and the larger, national constituency that could, if they chose, deliver Douglas the presidency in 1860.

“If Douglas answered yes” to Lincoln’s question, Riker wrote, and thereby said that a territory could exclude slavery prior to statehood, “then he would please Northern Democrats for the Illinois election”—because by explicitly stating that he and Lincoln shared the same opinion, he would take away one of Lincoln’s chief weapons, a weapon especially potent in far-northern, German-settled towns like Freeport. But what Lincoln saw, Riker says, is that if Douglas said yes he would also earn the enmity of Southern slaveowners, for whom it would appear “a betrayal of the Southern cause of the expansion of slave territory”—and thus cost him a clean nomination as the Democratic Party’s candidate for president in 1860. If, however, Douglas answered no, “then he would appear to capitulate entirely to the Southern wing of the party and alienate free-soil Illinois Democrats”—thereby hurting “his chances in Illinois in 1858 but help[ing] his chances for 1860.” In Riker’s view, in other words, at Freeport in 1858 Lincoln forked Douglas much as the Russian grandmaster would fork his opponent at the German spa in 1925.

Yet just as Alekhine’s game was hardly the last time the maneuver was used in chess, “forking” one’s political opponent scarcely ended in that little nineteenth-century Illinois farm village. Many of Hillary Clinton’s supporters now believe that the Russians “interfered” with the 2016 American election—but what hasn’t been addressed is how the Russian state, led by Putin, could have interfered with an American election. Like a vampire who can only enter a home once invited, anyone attempting to “interfere” with an election must have some material to work with; Lincoln’s question at Freeport, after all, exploited a previously existing difference between two factions within the Democratic Party. If the Russians did “interfere” with the 2016 election, that is, they could only have done so because there already existed yet another split within the Democratic ranks—which, as everyone knows, there was.

“Not everything is about an economic theory,” Hillary Clinton claimed in a February 2016 speech in Nevada—a claim familiar enough to anyone who’s been on a campus in the past two generations. After all, as gadfly Thomas Frank has remarked (referring to the work of James McGuigan), the “pervasive intellectual reflex” of our times is the “‘terror of economic reductionism.’” The idea that “not everything is about economics” is the core of what is sometimes known as the “cultural left,” or what Penn State University English professor (and former holder of the Paterno Chair) Michael Bérubé has termed “the left that aspires to analyze culture” as opposed to “the left that aspires to carry out public policy.” Clinton’s speech largely echoed the views of that “left,” which—according to the late philosopher Richard Rorty, in the book that inspired Bérubé’s remarks above—is more interested in “remedies … for American sadism” than those “for American selfishness.” It was that left that the rest of Clinton’s speech was designed to attract.

“If we broke up the big banks tomorrow,” Clinton went on to ask after the remark about economic theory, “would that end racism?” The crowd, of course, answered “No.” “Would that end sexism?” she continued, and then asked again—a bit more convoluted, now—about “discrimination against the LGBT community?” Each time, the candidate was answered with a “No.” With this speech, in other words, Clinton visibly demonstrated the arrival of this “cultural left” at the very top of the Democratic Party—the ultimate success of the agenda pushed by English professors and others throughout the educational system. If, as Richard Rorty wrote, it really is true that “the American Left could not handle more than one initiative at a time,” so that “it either had to ignore stigma in order to concentrate on money, or vice versa,” then Clinton’s speech signaled the victory of the “stigma” crowd over the “money” crowd. Which is why what Clinton said next was so odd.

The next line of Clinton’s speech went like this: “Would that”—i.e., breaking up the big banks—“give us a real shot at ensuring our political system works better because we get rid of gerrymandering and redistricting and all of these gimmicks Republicans use to give themselves safe seats, so they can undo the progress we have made?” It’s a strange line; in the first place, it’s not exactly the most euphonious group of words I’ve ever heard in a political speech. But more importantly—well, actually, breaking up the big banks could perhaps do something about gerrymandering. According to OpenSecrets.org, after all, “72 percent of the [commercial banking] industry’s donations to candidates and parties, or more than $19 million, went to Republicans” in 2014—hence, maybe breaking them up could reduce the money available to Republican candidates, and so lessen their ability to construct gerrymandered districts. But, of course, doing so would require precisely the kinds of thought pursued by the “public policy” left—which Clinton had already signaled she had chosen against. The opening lines of her call-and-response, in other words, demonstrated that she had chosen to sacrifice the “public policy” left—the one that speaks the vocabulary of science—in favor of the “cultural left”—the one that speaks the vocabulary of the humanities. By choosing the “cultural left,” Clinton was also in effect saying that she would do nothing about either big banks or gerrymandering.

That point was driven home in an article in FiveThirtyEight this past October. In “The Supreme Court Is Allergic To Math,” Oliver Roeder discussed the case of Gill v. Whitford—a case that not only “will determine the future of partisan gerrymandering,” but also “hinges on math.” At issue in the case is something called “the efficiency gap,” which is calculated by taking “the difference between each party’s ‘wasted’ votes—votes for losing candidates and votes for winning candidates beyond what the candidate needed to win—and divid[ing] that by the total number of votes cast.” The basic argument, in other words, is fairly simple: if a mathematical test shows that a given arrangement of legislative districts produces a large gap, that is evidence of gerrymandering. But in oral arguments, Roeder went on to say, the “most powerful jurists in the land” demonstrated “a reluctance—even an allergy—to taking math and statistics seriously.” Chief Justice John Roberts, for example, said it “may simply be my educational background, but I can only describe [the case] as sociological gobbledygook.” Neil Gorsuch, the man who received the office that Barack Obama was prevented from awarding, compared the metric “to a secret recipe.” In other words, in this case it is the disciplines of mathematics and, above all, statistics that are on the side of those wanting to get rid of gerrymandering—not those analyzing “culture” and fighting “stigma,” concepts the justices were busy employing, essentially, to wash their hands of the issue of gerrymandering.
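
Since the efficiency gap is just arithmetic on vote totals, a few lines of code can make the definition concrete. The sketch below is only an illustration of the formula Roeder describes, not anything drawn from the case record: the district totals are invented, and the two-party race and bare-majority winning threshold are assumptions of the sketch.

```python
# A minimal sketch of the "efficiency gap" described above.
# District vote totals are invented for illustration; a two-party race and a
# bare-majority winning threshold are assumptions, not facts from the case.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one tuple per district."""
    wasted_a = wasted_b = total_votes = 0
    for votes_a, votes_b in districts:
        cast = votes_a + votes_b
        needed = cast // 2 + 1                  # bare majority needed to win
        if votes_a > votes_b:
            wasted_a += votes_a - needed        # winner's surplus votes are wasted
            wasted_b += votes_b                 # every losing vote is wasted
        else:
            wasted_b += votes_b - needed
            wasted_a += votes_a
        total_votes += cast
    return (wasted_a - wasted_b) / total_votes  # signed gap as a share of all votes

# Hypothetical example: Party A wins two districts narrowly while Party B's
# voters are "packed" into the third, wasting many of their votes.
print(efficiency_gap([(55, 45), (60, 40), (30, 70)]))   # about -0.20
```

On these invented numbers the gap works out to roughly minus 20 percent of all ballots cast, meaning Party B wastes far more of its votes than Party A; which party the sign favors is purely a convention of the sketch.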

Just as Lincoln exploited the split between Douglas’ immediate voters in Illinois, who could give him the Senate seat, and the Southern slaveowners who could give him the presidency, in other words, Putin (or whomever else one wishes to nominate for that role) may have exploited the difference between Clinton supporters influenced by the current academy—and those affected by the yawning economic chasm that has opened in the United States. Whereas academics are anxious to avoid discussing money in order not to be accused of “economic reductionism,” the facts on the ground demonstrate that today “more money goes to the top (more than a fifth of all income goes to the top 1%), more people are in poverty at the bottom, and the middle class—long the core strength of our society—has seen its income stagnate,” as Nobel Prize-winning economist Joseph Stiglitz put the point in testimony to the U.S. Senate in 2014. Furthermore, Stiglitz noted, America today is not merely “the advanced country … with the highest level of inequality, but is among those with the least equality of opportunity.” Or, as David Rosnick and Dean Baker put it in November of that same year, “most [American] households had less wealth in 2013 than they did in 2010 and much less than in 1989.” To address such issues, however, would require precisely the sorts of intellectual tools—above all, mathematical ones—that the bien pensant orthodoxy represented by Hillary Clinton, the orthodoxy that abhors sadism more than selfishness, thinks of as irrelevant.

But maybe that’s too many moves ahead.

Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 Theodore Roosevelt, the twenty-sixth president of the United States, offered what he called a “Square Deal” to the American people—a deal that, he explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but also—and, as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more”; it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given that the United States was the “Arsenal of Democracy,” quite central indeed. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence it may be no surprise that Dwight Eisenhower, who had commanded the Allied war in western Europe, selected Wilson to be his Secretary of Defense after being elected president in 1952—a choice that led to the confirmation hearings that made Wilson famous, and perhaps the unnamed subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse to General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how easily people within an organization can come to identify its interests with everyone else’s—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-author Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” is that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, to understand how an organization’s purposes might differ from those of the larger society, a good place to look is how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he held at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that stake would be worth nearly ten times as much today—and, given contemporary trends in executive pay, the equivalent holding would surely be greater still: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other material rewards of the kind Charlie Wilson possessed, but also two other sorts: internal rewards generated within the organization itself—and rewards bound up with the organization’s stated intent, or purpose, in society at large. Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago; what “Incentive Systems” raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may circle back to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has grown so much faster than that of workers. It’s a story that was usefully summarized by New York University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-First Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply holds “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient and less productive the economy becomes—and ultimately the poorer America is. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
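
Stiglitz’s law is, at bottom, arithmetic, and a toy calculation makes it visible. In the sketch below, only the $21.7 million figure and the 500-way split at $43,400 come from the passage above; the spending rates are invented assumptions standing in for the familiar idea that a very rich household spends a small share of its income while a middle-income household spends nearly all of it.

```python
# Toy illustration of the aggregate-demand argument quoted above.
# The spending rates are assumptions for illustration, not Stiglitz's figures.

income = 21_700_000          # Romney's 2010 income, per the passage above

rich_spend_rate = 0.10       # assumed: a very rich household spends ~10% of income
middle_spend_rate = 0.95     # assumed: a $43,400 household spends ~95% of income

spent_if_concentrated = income * rich_spend_rate
spent_if_divided = 500 * (income / 500) * middle_spend_rate   # 500 jobs at $43,400

print(f"Spent when concentrated in one household: ${spent_if_concentrated:,.0f}")
print(f"Spent when divided among 500 households:  ${spent_if_divided:,.0f}")
```

Under those assumptions the divided income generates roughly ten times the spending of the concentrated income; change the assumed rates and the sizes shift, but the direction of the comparison does not.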

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Stiglitz’s at Columbia University, though in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field built on arguments about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists only of telling stories, then “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
—Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware of it, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht was, Oliver Cromwell had said two centuries before, one of the two places Irish Catholics could go if they did not wish to be murdered by his New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seed to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive nomination of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, then for racism to explain Trump the American public cannot actually be getting less racist—it must merely be getting better at hiding it. That raises a timing question: if the level of racism remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism, in short, has a timing problem: it’s difficult to believe that racism has somehow become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation: an all-white slate that had become the “official” delegation only by suppressing the votes of the state’s 400,000 black people—a charge that had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but instead of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in repeating, because as the notably non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president to make economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all, put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including a diminished ability to stand up for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has since become the template for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality to which even Republican candidates other than Trump—who, along with Bernie Sanders, was for the most part the only candidate to make these issues central—began to respond during the primary season over the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it is nevertheless inarguable that the country has been radically remade economically over recent decades.

That remaking has troubling potential consequences, if they have not already become real. One of them has been ably described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should be self-evident (but apparently isn’t, to many people), a lot more will happen economically if 500 people split twenty million dollars than if one person keeps all of it.

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’) argument in favor of a broadly-based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus, in other words, a deal implicitly allowed by the Democratic Party’s leadership, enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism,” the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend,” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” Macedo asks, in short: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in other words, have begun to wonder publicly why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe, for instance, in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to lend the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun the two positions were much the same thing. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) Which is to say that Stephen Macedo and Fannie Lou Hamer have chosen a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that such arguments are having very real effects.

What’s really significant about Macedo’s “insights” into majoritarianism, in short, is that, coming from the holder of a named chair at one of the most prestigious universities in the world, they show just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what Lenin supposedly called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to undo the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to think that their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of sense—even if it appears utterly irrational to the elite. Although Trump’s supporters might not express themselves in terms that those with elite educations find palatable—a squeamishness that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while organs of elite opinion like The New York Times might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.

This Doubtful Strife

Let me be umpire in this doubtful strife.
—Henry VI, Part 1. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, passed a different marker just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it is more than questionable, in other words, whether the humanities’ opposition to the sciences has had beneficial rather than pernicious effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But any real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it the case that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” but the point also cuts across national boundaries; Pinsker reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been thought of as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary baseball umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also of the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America grew wider. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm it does requires only the most elementary mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although what Stiglitz is referring to is “socially constructed”—in the sense that it is a creation of human society—it is also simply a fact of nature that would hold whether the economy in question involved Aztecs or ants: whatever the underlying substrate, those at the top of a pyramid will spend a smaller share of what they take in than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, dividing the money more equally generates more economic activity—and hence the more equal society is also the more prosperous society.

Still, to understand Stiglitz’ point requires understanding a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on rather a different cast than it is usually given. By training its students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st-century America. Even worse, they fail to give students a means of combatting that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
Traditional; rec. by Charles Davis Tillman, 1889
Lexington, South Carolina

… science is but one.
Lucius Annaeus Seneca.

Rule changes for golf usually come into effect on the first of the year; this year, the big news is the ban on “anchored” putting: the practice of holding one end of the putter in place against the player’s body. Yet as has been the case for nearly two decades, the real news from the game’s rule-makers this January concerns a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as, for instance, Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued then that something needed to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis have demonstrated since, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical because so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe a regulatory body could have any advantage over a “market-maker” like King. Further, once religious training ends, it’s unlikely that many remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments would begin with something important—like, say, a prohibition against murder—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps the most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the basis that, as the title of his essay says, the two subjects possess “non-overlapping magisteria”: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say then that the First Commandment could be thought of as a rendering of a logical rule seen as if through a glass darkly would be impermissible according to the prohibition laid down by Gould (among others): the prohibition against importing science into religion or vice versa. And yet some argue that such a prohibition is nonsense: Richard Dawkins, for instance, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, limiting itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or more specifically, in a way that protects it.

Supposing Dawkins, and not Gould, to be correct then is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment could be thought of as a reflection of what’s known as the “law of non-contradiction,” usually called the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936, F. Scott Fitzgerald claimed in Esquire that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric used by the First Commandment, or the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty in other words would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, it’s necessary to create more different kinds of golf. But the work of Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, what happens when the supply of a commodity is limited and not (potentially) infinite.

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth, but even aside from the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of the classical idea of the “law of demand.” On the classical view, as money—or any commodity one likes—becomes rarer, its scarcity drives people to seek more of it. But, Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity stop bidding the price up and instead look for substitutes for that commodity. Thus, overall “demand” must necessarily decline.

That, for instance, is what happened to cotton after the year 1860. That year, cotton grown in the southern United States was America’s leading export, and constituted (as Eugen R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain.” But as the war advanced—and the Northern blockade took effect—that percentage plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, the share of other sources of supply rose: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports amounted to merely $7 million prior to the bombardment of Fort Sumter in 1861—but by the end of the war in 1865, Egyptian profits were $77 million, as Europeans sought sources of supply other than the blockaded South. This, despite the fact that it was widely acknowledged that Egyptian cotton was inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South thus failed to understand that, while it did constitute the lion’s share of production prior to the war, it was not the sole place cotton could be grown—other models of production existed. In some cases, however—through natural or human-created means—an underlying commodity can have a bottleneck of some kind, creating a shortage. According to classical economic theory, in such a case demand for the commodity will grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether to an inferior substitute or, perhaps, to giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to gain such a stranglehold on the market that it kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation has only two possible resolutions: either the monopoly is broken, or people turn to a completely different substitute. To borrow the schoolyard idiom, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s annual participation survey found 30 million players; by 2014, the number was slightly under 25 million, according to a Golf Digest story by Mike Stachura. Mark King’s plan to win those players back, as we’ve seen, is to invent a new set of rules—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you could take up another set of rules.

Yet an examination of the sport as it is, I’d say, would find that Rorty’s ideal is already, more or less, a description of the current model of golf in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Despite the fact that, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” it’s also true that private clubs constitute around 30 percent of all golf facilities, and as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means of learning to play the game. To put it in Stiglitz’ terms, the PGA of America—and the private clubs who hire PGA professionals to staff their operations—essentially constitute a monopoly on instruction, or in other words on the basic education in how to accomplish the essential skill of the game: hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers play the game to enjoy. Unfortunately, it’s one that most golfers never achieve: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to re-achieve the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that the reason golf’s participation rate has declined is not an unfair set of rules, but rather that golf’s model bears more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant holds so much power that it effectively destroys the entire market. In situations like that, Stiglitz (and many other economists) argue that regulatory intervention is necessary—a realization that, perhaps, the United States Golf Association is also arriving at through its continuing refusal to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect by continuing to allow some to monopolize access to the pleasures of the sport. By refusing to allow an “escape hatch” by which the older model could cling to life, the USGA is, consciously or not, speeding the day when golf will become “all one thing or all the other,” as someone once said upon a vaguely similar occasion, invoking an idea not unlike the First Commandment or the law of non-contradiction. What the USGA’s stand in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” may be no different than, say, claiming your subprime mortgages are sound, or that police crime figures accurately reflect crime. For the USGA then, if no one else, that old time religion is good enough: despite being against anchoring, it seems the golf organization still believes in anchors.

Talk That Talk

Talk that talk.
“Boom Boom.”
    John Lee Hooker. 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, there are a lot of things that exist yet have little rational ground for existing, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for, and solidarity with, society’s downtrodden with a belief that the way to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as being on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it depends upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it has depended on patronage of one sort or another. Hence a potential intellectual weakness of basing a “left” on the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area that he correctly characterizes as at least as unequal as—if not more unequal than—any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emp. original]; and also, “small size equals safe choices.” That is, of the collections he studied, he found that the smaller they were, the more homogenous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, just is the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever wins a contest gets everything—and what that means, as Taleb says about the art world, is “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. That means someone who might possess more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because the world then isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, it would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their positions by sheer happenstance rather than merit.
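(The dynamic is easy to see in a toy simulation; what follows is my own sketch of a “rich-get-richer” process, not a model Taleb himself lays out. Two artists of identical quality begin with nearly identical attention, one has a single extra unit of luck, and each new unit of attention thereafter goes to an artist in proportion to the attention that artist already has.)

```python
import random

def run_market(rounds=5_000, rng=None):
    """One toy market in which attention accrues to the already-attended-to."""
    rng = rng or random.Random()
    attention = [2, 1]  # identical quality; artist 0 starts with one extra unit of luck
    for _ in range(rounds):
        total = attention[0] + attention[1]
        winner = 0 if rng.random() < attention[0] / total else 1
        attention[winner] += 1
    return attention

rng = random.Random(1858)
shares = []
for _ in range(500):
    final = run_market(rng=rng)
    shares.append(final[0] / sum(final))

print(f"Average final share of the initially lucky artist: {sum(shares) / len(shares):.2f}")
print(f"Markets where that artist ends with over 60% of all attention: "
      f"{sum(s > 0.6 for s in shares)} of {len(shares)}")
```

In repeated runs of this sketch, the one-unit head start routinely becomes a roughly two-to-one advantage in final attention; run long enough, luck becomes indistinguishable from merit.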

A society, in other words, that is attempting to maximize the talent available to it—and it seems hardly arguable that this is the obvious goal—should not be burying potential talent, but instead exposing as much of it as possible: getting it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years and years of training and experience, in “culture” such performers often either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish kind of politics than culture—which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature and so on proclaiming “power to the people,” the United States is as unequal a place today as it has been at any time since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to just such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome—which also means a potentially greater role for chance. It is worse still when one considers that the court is an institution that gains new members only irregularly: appointing a new Supreme Court justice depends on who happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
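(Again, a toy sketch of my own rather than anything in Purdy or Taleb: treat each interpreter’s judgment of a question as a noisy reading of some underlying value, then compare how much the collective answer swings from panel to panel when the panel has nine members versus ninety-nine.)

```python
import random
import statistics

def panel_outcome(n_judges, rng, true_value=0.0, noise=1.0):
    """The 'decision' of one panel: the median of n noisy individual readings."""
    readings = [rng.gauss(true_value, noise) for _ in range(n_judges)]
    return statistics.median(readings)

rng = random.Random(1803)
for n in (9, 99):
    outcomes = [panel_outcome(n, rng) for _ in range(5_000)]
    print(f"{n:>2} interpreters: spread of panel decisions (std. dev.) = "
          f"{statistics.pstdev(outcomes):.3f}")
```

In this sketch the nine-member panels’ decisions wander roughly three times as widely as the ninety-nine-member panels’ do, which is all the argument requires: the fewer the interpreters, the larger the role of chance.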

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, depends on a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.