To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Where, Oliver Cromwell had said two centuries before, was one choice for Irish Catholics to go if they did not wish to be murdered by Cromwell’s New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seeds to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, and yet racism is what is fueling Trump, then that must mean that the American public is not getting less racist—but instead merely getting better at hiding it. That then raises the question: if the level of racism remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation on the grounds that Mississippi’s official delegation, an all-white slate of delegates, had only become the “official” delegation by suppressing the votes of the state’s 400,000 black people—which had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him; “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but instead because of policy choices made by Democrats: choices that not only suggest Republican cynicism can be matched by Democratic cynicism, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in repeating, because, as the famously non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president who made economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all, put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including an inability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump—who, aside from Bernie Sanders, was largely the only candidate to address these issues—began to respond to during the primary season over the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That reformation has troubling potential consequences, if those consequences have not already become real. One of them has been adequately described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as surely should be self-evident (though apparently it isn’t, to many people), a lot more will happen economically if 500 people split twenty million dollars than if one person has all of it.
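
Stiglitz’s arithmetic can be checked in a few lines. The sketch below (in Python) simply runs the numbers; the two spending rates are illustrative assumptions of mine, not figures from Stiglitz, meant only to show why the same income divided among 500 households supports far more spending.

```python
# Stiglitz's arithmetic: the same $21.7 million supports more spending when
# divided among 500 earners than when concentrated in one household.
# The two spending rates below are illustrative assumptions, not Stiglitz's figures.
total_income = 21_700_000      # Romney's reported 2010 income

spend_rate_rich = 0.10         # assumed: a very rich household spends a small share of income
spend_rate_middle = 0.95       # assumed: a $43,400-a-year household spends nearly all of it

concentrated_spending = total_income * spend_rate_rich
divided_spending = 500 * (total_income / 500) * spend_rate_middle

print(f"One earner spends roughly ${concentrated_spending:,.0f}")   # ~$2,170,000
print(f"500 earners spend roughly ${divided_spending:,.0f}")        # ~$20,615,000
```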

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’) argument in favor of a broadly-based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus in other words, implicitly allowed by the Democratic Party’s leadership, enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism”—the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” In other words, Macedo asks: “why should we regard majority rule as morally special?” Why should it matter, in other words, if one candidate should get more votes than another? Some academics, in short, have begun to wonder publicly about why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe for instance in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to give the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well-known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun they were much the same thing: to argue against majority rule was to argue for slavery. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) And that is to say that Stephen Macedo and Fannie Lou Hamer are choosing a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that those arguments are having very real effects.

What’s really significant, in short, about Macedo’s “insights” about majoritarianism is that, as a possessor of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what Lenin is said to have called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to think that their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although they might not express themselves in terms that those with elite educations find palatable—in a fashion that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while the organs of elite opinion like The New York Times or other outlets might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.


An Unfair Game

The sheer quantity of brain power that hurled itself voluntarily and quixotically into the search for new baseball knowledge was either exhilarating or depressing, depending on how you felt about baseball.
Moneyball: The Art of Winning an Unfair Game

“Today, in sports,” wrote James Surowiecki in The New Yorker a couple of years ago, “what you are is what you make yourself into”—unlike forty or fifty years ago, nearly all elite-level athletes now have a tremendous entourage of dietitians, strength coaches, skill coaches, and mental coaches to help them do their jobs. But not just athletes: at the team level, coaches and scouts have learned to use data both to recruit the best players and to turn that talent into successful strategies. Surowiecki notes for instance that when sports columnist Mark Montieth went back and looked at old NBA games from the 1950s and 60s, he found that NBA coaches at the time “hadn’t yet come up with offenses sophisticated enough to create what are considered good shots today.” That improvement, however, is not limited to sports: Surowiecki also notes that in fields ranging from chess and classical music to airline safety and small-unit infantry tactics, the same basic sorts of techniques have greatly improved performance. What “underlies all these performance revolutions,” Surowiecki says, “is captured by the Japanese term kaizen, or continuous improvement”—that is, the careful analysis of technique. Still, what is most curious about kaizen-type innovation is not that it can be applied so variously, but that it has not been applied to many other fields: among them, Surowiecki lists medicine and education. Yet the field that might be ripest for the advent of kaizen—and the one with the greatest payoff for Americans, greater even than the fact that lemon cars are mostly a thing of the past—is politics.

To be sure, politics doesn’t lend itself particularly well to training in a wind tunnel, as the top-level cyclists Surowiecki discusses do. Nor is politics likely to be improved much by ensuring, as the Portland Trail Blazers do, that everyone in government gets enough rest, or that they eat correctly—although one imagines that in the case of several politicians, the latter might greatly improve their performance. But while the “taking care of the talent” side of the equation might not, in the field of politics, be the most efficient use of resources, certainly the kinds of techniques that have helped teams improve their strategies just might be. For example, in baseball, examining statistical evidence for signs of how better to defend against a particular batter has become wildly more popular in recent years—and baseball’s use of that strategy has certain obvious applications to American politics.

That team-level strategy is the “infield shift,” the technique whereby fielders are positioned in unusual alignments in order to take account of a particular batter’s tendencies. If, for example, a particular player tends to hit the ball to the left side of the field—a tendency readily observable in this age of advanced statistical analysis in the post-Moneyball era—teams might move the second baseman (on the right side of the infield) to behind second base, or even further left, so as to have an extra fielder where the batter tends to place his hits. According to the Los Angeles Times, the use of the “infield shift” has grown far beyond anything seen before: the “number of shifts,” the Times’ Zach Helfand wrote last year, “has nearly doubled nearly every year since 2011, from 2,357 to 13,298 last year.” This past season (2015), the use of shifts exploded again, so that there were 10,262 uses of a shift “by the All-Star break,” Helfand reported. The use of shifts is growing at such a rate, of course, because shifts work: the “strategy saved 190 runs in the first half this (2015) season, according to estimates from Baseball Info Solutions,” Helfand says. The idea makes intuitive sense: putting players where they are not needed is an inefficient use of a team’s resources.
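
The logic of the shift is easy to sketch in code. The toy function below is my own illustration, not any team’s actual model; the 65 percent threshold is an arbitrary assumption. It takes a record of where a batter’s batted balls have gone and decides whether to overload one side of the infield.

```python
def should_shift(hit_angles, pull_threshold=0.65):
    """Toy spray-chart analysis: decide whether to shift the infield.

    hit_angles: horizontal angles of a batter's batted balls, in degrees,
    where negative values are toward third base (the pull side for a
    right-handed hitter) and positive values are toward first base.
    The 65% threshold is an illustrative assumption, not a real team's rule.
    """
    if not hit_angles:
        return None  # no data, no shift
    pulled = sum(1 for angle in hit_angles if angle < 0)
    pull_rate = pulled / len(hit_angles)
    if pull_rate >= pull_threshold:
        return "shift left: move the second baseman toward (or past) second base"
    if (1 - pull_rate) >= pull_threshold:
        return "shift right: move the shortstop toward (or past) second base"
    return "play straight up"

# A right-handed pull hitter: most balls go to the left side of the infield.
print(should_shift([-30, -25, -12, -40, -8, 15, -22, -35, 10, -18]))
```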

The infield shift is a strategy, as it happens, that one of the greatest of America’s Supreme Court justices, Earl Warren, would have approved of—because he, in effect, directed the greatest infield shift of all time. About the line of cases now known as the “apportionment cases,” the former Chief Justice wrote that, despite having presided over such famous cases as Brown v. Board of Education (the case that desegregated American schools) or Miranda v. Arizona (which required that suspects be informed of their rights, including the right to counsel, before police questioning), he was most proud of his role in these cases, which took on the fact that the “legislatures of more than forty states were so unbalanced as to give people in certain parts of them vastly greater voting representation than in others.” In the first of that line, Baker v. Carr, the facts were that although the number of qualified voters in Tennessee had grown from 487,380 in 1901 to 2,092,891 in 1961, and although that population had not been distributed evenly throughout the state but instead was concentrated in urban areas like Nashville and Memphis, Tennessee had not reapportioned its legislature since 1901. This, said Warren, was ridiculous: in effect, Tennessee’s legislature was not merely unshifted; it was shifted the wrong way. If the people of Tennessee were a right-handed pull-hitter (i.e., one who tends to hit to the left side of the field), in other words, Tennessee’s legislature had the second baseman, the shortstop, and the third baseman on the right side of the field—i.e., toward first base, not third.

“Legislators represent people, not trees or acres,” Warren wrote for a later “apportionment case,” Reynolds v. Sims (about Alabama’s legislature, which was, like Tennessee’s, also wildly malapportioned). What Warren was saying was that legislators ought to be where the constituents are—much as baseball fielders ought to be where the ball is likely to be hit. In Reynolds, the Alabama legislature wasn’t: because the Alabama Constitution provided that the state senate would be composed of one senator from each Alabama county, some senate districts had voting populations as much as 41 times that of the least populated. Warren’s work remedied that vast disparity: as a result of the U.S. Supreme Court’s decisions in Baker, Reynolds, and the other cases in the “apportionment” line, nearly every state legislature in the United States was forced to redraw boundaries and, in general, make sure the legislators were where the people are.

Of course, it might be noted that the apportionment cases were decided more than fifty years ago, and that the injustices they addressed have now all been corrected. Yet, it is not merely American state legislatures that were badly misaligned with the American population. After all, if the state senate of Alabama was badly malapportioned through much of the twentieth century and before, it is also true that the Senate of the United States continues to be malapportioned today: if the difference between Alabama’s least populated county and its most in the early 1960s was more than 40 times, the difference between the number of voters in Wyoming, the least populated American state, and California, the most, is now more than 60 times—and yet each state has precisely the same number of senators in the U.S. Senate. These differences, much like infield shifts, have consequences: in such books as Sizing Up the Senate: The Unequal Consequences of Equal Representation, political scientists like Frances E. Lee and Bruce I. Oppenheimer have demonstrated that, for example, “less populous states consistently receive more federal funding than states with more people.” Putting legislators where the people aren’t, in other words, has much the same effect as not shifting a baseball team’s infield: it allows money, and the other goods directed by a legislature, to flow—like hits—in directions they otherwise wouldn’t were there fielders, or legislators, in place to redirect those flows.
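
The Senate figure is easy to verify. The snippet below runs the division using approximate 2010 census populations (my figures; the exact ratio shifts a little depending on the year chosen).

```python
# Rough check of the malapportionment figure in the text, using approximate
# 2010 census populations (my figures; the exact ratio depends on the year).
populations = {
    "Wyoming": 563_626,        # least populous state, 2010 census
    "California": 37_253_956,  # most populous state, 2010 census
}
senators_per_state = 2  # the same number regardless of population

ratio = populations["California"] / populations["Wyoming"]
print(f"Each Wyoming resident has roughly {ratio:.0f}x the Senate "
      f"representation of each California resident.")
# prints a ratio of roughly 66
```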

To say that moving America’s legislature around would have an instantaneously positive effect on American life, of course, is likely to overstate the effect such a move might make: some batters in the major leagues, like Albert Pujols, have been able to overcome the effects of an infield shift. (Pujols, it seems, bats 28 points higher when a shift is on than when it isn’t, Zach Helfand reported.) Yet teams still use the shift on Pujols—on the theory, apparently, that, first, it is unlikely he can keep that up, and, second, that on the occasions when he misses “hitting the gaps,” a fielder will be there.

Similarly, although it might be so that, as Senator Everett Dirksen of Illinois argued in the aftermath of 1964’s Reynolds, “the forces of our national life are not brought to bear on public questions solely in proportion to the weight of numbers,” the forces behind such examples as Billy Beane’s Oakland A’s teams—assembled largely on the weight of the statistics put up by the players—or Japanese car companies—which redesigned workspaces, Surowiecki says, “so workers didn’t have to waste time twisting and turning to reach their tools”—beg to differ: although not every question can be solved by the application of kaizen-like techniques, surely a number of them can.

Among them, it may be, is gun-control legislation, which has continually been held up by structural features of the American Congress that have much to do with malapportionment. Surely, in other words, with regard to gun policy it matters that the Senate is heavily stacked in favor of mostly-rural states. Were it not, it is much easier to imagine the United States having a gun policy much more in line with that of other industrialized democracies. Which, in the light of incidents like the recent shooting deaths in Orlando, casts new light on an old baseball phrase.

That phrase?

“Hit ’em where they ain’t.”

This Doubtful Strife

Let me be umpire in this doubtful strife.
Henry VI, Part 1. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another step on the path to computerized refereeing—a march that, she says, reached another milestone just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable, in other words, whether the humanities’ opposition to the sciences has had beneficial rather than pernicious effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But any real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: not only is it so that, as Pinsker observes, “the priciest, top-tier schools don’t offer Law Enforcement as a major,” but the point also cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been thought of as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also of the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend a smaller share of what they have than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, by dividing the money more equally, more economic activity is generated—and hence the more equal society is also the more prosperous society.

Still, to understand Stiglitz’ point requires following a sequence of connected ideas—among them basic mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on rather a different cast than it is often given. By training its students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st century America. Even worse, they fail to give students a means of combating that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

The Judgment of Paris

Eris [Strife] tossed an apple to Hera, Athena, and Aphrodite … and Zeus bade Hermes escort them to [Paris] on Ide, to be judged … They offered [Paris] gifts: Hera said if she were chosen fairest of all women, she would make him king of all men; Athena promised him victory in war; and Aphrodite promised him Helene in marriage.
So he chose Aphrodite.
—Pseudo-Apollodorus. c. 2nd century AD.

 

Watching the HBO series Silicon Valley the other day, I came across a scene depicting a judged competition for venture capital dollars between different development teams—all of which promised to, for example, “make the world a better place through Paxos algorithms for consensus protocols,” or to “make the world a better place through canonical data models to communicate between endpoints.” “Making the world a better place” is a common line these days from the tech world: the language of “disruption” and “change agents” and “paradigm shifts” is everywhere. Yet, although technology is certainly having, directly or indirectly, an effect on everyone’s life, the technological revolution has had very little impact on traditional areas of political economy: for example, the productivity gains of the “New Economy” have had essentially zero effect on wages (upending a relationship that nearly every economist would have said was inherent to the universe prior to recent decades). Meanwhile, due to efforts like voter ID laws, many states are teetering perilously close to reviving Jim Crow. Still, to many, a world in which technological progress seems limitless while political progress appears impossible might not seem a matter for comment: why should technology have anything to do with politics? But to those who first theorized that electing officials might be better than waiting around for them to be born, computers would not have been in a different category from voting. To them, voting was a kind of computing.

One of those people hailed from the island of Majorca, in the Mediterranean Sea off the Spanish coast; his name was Ramon Llull. Born in 1232, Llull lived during a time when Spain was divided between a Catholic northern part of the Iberian peninsula and a Muslim southern part. (Spain would not be unified until the Fall of Granada, in 1492.) Not that that bothered Llull as a young man: in his autobiography, the Vita coaetanea (or “A Contemporary Life”), he narrates that in his twenties he “was very given to composing worthless songs and poems and to doing other licentious things.” But when he was 33—a portentous age during the Middle Ages: supposedly Christ’s age at the Crucifixion, it was also the number of cantos in each part of Dante’s Divine Comedy—Llull experienced a vision of “our Lord Jesus Christ on the Cross, as if suspended in mid-air.” From that moment, Llull decided his life had three missions: to convert Muslims to Christianity even if it meant his own martyrdom (which he may—or may not—have achieved in the city of Bougie, in what is now Algeria, in the year 1315), to found institutions to teach foreign languages (which he achieved in 1311, when the Council of Vienne ordered chairs of Hebrew and Arabic established at the universities of Paris, Oxford, Bologna, and Salamanca), and to write a book about how to convert Muslims. This last of Llull’s three missions has had profound consequences: it is the starting point of a path that leads both to Silicon Valley—and to the voting booth.

In order to convert Muslims, Llull had, somewhere around the year 1275, the startling idea that Christianity could be broken down into constituent parts: he was a believer in what one commentator has called “conceptual atomism, the belief that the majority of concepts are compounds constructed from a relatively small number of primitive concepts.” As such, Llull thought that the Christian God possessed a certain number of relevant qualities, that God had limited the world to a certain number of what we today would call logical operations, and that the world contained a limited number of both virtues and vices. With these “primitive concepts” in hand, Llull described how to construct a kind of calculator: a series of concentric wheels within wheels that could be turned so that, say, God’s attributes could be lined up with a logical category, and then with a particular vice (or virtue). By this means, a Christian missionary could reply to any objection a Muslim (or anyone else) might raise to Christianity merely by turning a wheel, which made Llull’s machine a forerunner to today’s computers—and not simply by analogy.
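
The combinatorial heart of Llull’s device is easy to suggest in code. In the sketch below the lists are illustrative stand-ins rather than Llull’s actual terms, but the principle is his: every alignment of the wheels is simply one element of the Cartesian product of the rings.

```python
# A minimal sketch of the combinatorial idea behind Llull's wheels.
# The particular lists are illustrative stand-ins, not Llull's own tables:
# turning the wheels lines up one item from each ring, and every possible
# alignment is an element of the Cartesian product of the rings.
from itertools import product

divine_attributes = ["goodness", "greatness", "eternity", "power", "wisdom"]
logical_relations = ["difference", "concordance", "contrariety"]
moral_terms = ["justice", "prudence", "pride", "envy"]

combinations = list(product(divine_attributes, logical_relations, moral_terms))
print(len(combinations))   # 5 * 3 * 4 = 60 possible alignments
print(combinations[0])     # ('goodness', 'difference', 'justice')
```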

Llull’s description of his conversion machine, that is, ended up traveling from Spain to Germany, where—some four centuries later—a philosopher and mathematician found it. Llull’s work became the basis of Gottfried Leibniz’s invention of what’s become known as a “pinwheel calculator,” which Leibniz described in his Machina arithmetica in qua non additio tantum et subtractio sed et multiplicatio nullo, diviso vero paene nullo animi labore peragantur of 1685. In 1694, Leibniz was able to build a practical model of the machine, which he improved again in 1706. These models later became the basis of Frenchman Thomas de Colmar’s Arithmometer of 1851, which in turn became the basis of mechanical calculators until—in 1937—Howard Aiken convinced Thomas Watson to fund the machine known as the Automatic Sequence Controlled Calculator: in other words, the Harvard Mark I, one of the world’s first computers.

That story is not unknown, of course, especially to those with an interest in both history and computing. But it is a story that is not often told in tandem with another of Llull’s interests: the creation of what today would be called electoral systems. In three different works, Llull described what is essentially the same system of voting—a system later re-invented by Marie Jean Antoine Nicolas Caritat, the Marquis de Condorcet, in the late eighteenth century. In this method, candidates—Llull used religious examples, such as the election of a monastery’s abbot—are paired off with each other, and voters decide which of the pair is the more worthy. This continues until every candidate has been paired with every other, and the candidate who prevails in the most pairings is declared the winner. (In modern political science, a candidate who wins every one of these head-to-head contests is called the “Condorcet winner.”) In this fashion—which in one place Llull says he invented in Paris, in 1299—an election is a kind of round-robin tournament, somewhat analogous to the group stage of soccer’s World Cup.
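
For the curious, the method is short enough to sketch in a few lines of Python. This is an illustration of the pairwise idea described above, not a reconstruction of Llull’s own procedure; the candidate names are borrowed from the Judgment of Paris, which returns below.

```python
# A minimal sketch of the pairwise method described above: every candidate
# meets every other head-to-head, and the candidate who wins the most
# pairings wins the election.
from itertools import combinations

def pairwise_election(candidates, ballots):
    """ballots: list of rankings, each a list of all candidates, best first."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_votes = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        b_votes = len(ballots) - a_votes
        if a_votes > b_votes:
            wins[a] += 1
        elif b_votes > a_votes:
            wins[b] += 1
    return max(wins, key=wins.get), wins

candidates = ["Hera", "Athena", "Aphrodite"]
ballots = [
    ["Aphrodite", "Hera", "Athena"],
    ["Aphrodite", "Athena", "Hera"],
    ["Hera", "Athena", "Aphrodite"],
]
winner, tally = pairwise_election(candidates, ballots)
print(winner, tally)   # Aphrodite wins both of her pairings
```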

Yet it’s also possible to take the analogy in another direction—that is, towards Palo Alto. A computer, after all, is a machine, and like all machines is meant to do particular jobs; what makes the computer different from other machines is just the number of different jobs it can do. But being capable of doing so many different kinds of tasks—from assembling emails to calculating the distance of a faint star—means computers need what are called schedulers: algorithms that tell the computer in what order to do the work it has been assigned. One kind of scheduler is called a “round-robin” scheduler: in this scheme, the algorithm tells the computer to work on each task for a fixed slice of time and then to move on to the next if the job is not completed by the deadline. The computer then cycles through each task, working on each for the same amount of time, until every job is done. A variant, called “weighted round robin,” makes the analogy to Llull’s electoral scheme more precise: in this variant, each task is assigned a “weight,” which signifies just how much processing capacity the job will need—the scheduler then assigns priority according to that weight, by comparing each job’s weight to every other job’s weight. To each job, the weighted round robin scheduler assigns computing power—according to its need, as it were.
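
A toy version of the weighted scheme makes the analogy concrete. The sketch below is illustrative only (real operating-system schedulers are far more elaborate), but it shows the principle: each job gets work in proportion to its weight, and the scheduler keeps cycling until everything is finished.

```python
# A minimal sketch of weighted round-robin scheduling (illustrative only).
# Each job receives a slice of work per cycle in proportion to its weight.
from collections import deque

def weighted_round_robin(jobs, quantum=10):
    """jobs: dict of name -> (total_work, weight). Returns the order of service."""
    queue = deque(jobs.items())
    remaining = {name: work for name, (work, _) in jobs.items()}
    log = []
    while queue:
        name, (work, weight) = queue.popleft()
        slice_size = quantum * weight                 # heavier jobs get bigger slices
        remaining[name] -= slice_size
        # record only the work actually done in this slice
        log.append((name, min(slice_size, slice_size + remaining[name])))
        if remaining[name] > 0:
            queue.append((name, (work, weight)))      # not done: back of the line
    return log

print(weighted_round_robin({"email": (15, 1), "star_distance": (40, 2)}))
# [('email', 10), ('star_distance', 20), ('email', 5), ('star_distance', 20)]
```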

In this way, in other words, democracy can be demystified, instead of being fetishized as the kind of mystical process it sometimes is. Defenders of democracy sometimes cite, for example, the Latin phrase, “vox populi, vox Dei”: “the voice of the people is the voice of God.” But democracy should not be defended by that means: democracy should not be thought of as a kind of ritual, a means of placating a suspicious and all-powerful deity. Instead, it ought to be thought of as a kind of algorithm, like a round-robin sorting algorithm. Such, at least, was how one of the earliest English uses of the “vox populi” phrase described it; in a 1709 London pamphlet—produced, coincidentally or not, just after Leibniz’s 1706 calculator—entitled “Vox Populi, Vox Dei,” the anonymous author defended the following idea:

There being no natural or divine Law for any Form of Government, or that one Person rather than another should have the sovereign Administration of Affairs, or have Power over many thousand different Families, who are by Nature all equal, being of the same Rank, promiscuously born to the same Advantages of Nature, and to the Use of the same common Faculties; therefore Mankind is at Liberty to chuse what Form of Government they like best.

To put the point another way, the author is saying that each person ought to be treated somewhat like the round-robin scheduler treats each of its tasks: as worthy of the same attention as every other. Like the computer’s jobs, every person is treated equally—on earth, as it is in Silicon Valley. The wonder, of course, is that the doyens of the Valley have not learned the lesson yet—but there is still time, as always, for a snake to disturb that Eden. When it does, perhaps the revolutionary rhetoric of the lords of California’s silicate kingdom might begin to match reality—and Silicon Valley may indeed prove a lever to “change the world.” Which, after all, might not be unprecedented in history: in one of the earliest stories about a kind of Condorcet, or Llullian, election—the Judgment of Paris—the contest between the three goddesses (Hera, Aphrodite, and Athena) is provoked by a single object.
That object?

An apple.

The Commanding Heights

The enemy increaseth every day; 
We, at the height, are ready to decline.
Julius Caesar. Act IV, Scene 3.

 

“It’s Toasted”: the two words that began the television series Mad Men. The television show’s protagonist, Don Draper, comes up with them in a flash of inspiration during a meeting with the head of Draper’s advertising firm’s chief client, the cigarette brand Lucky Strike: like all cigarette companies, Luckies have to come up with a new campaign in the wake of government restrictions barring health claims in cigarette advertising. Don’s solution is elegant: by simply describing the manufacturing process of making Luckies—a process that is essentially the same as that of every other cigarette—the brand does not have to make any kind of claim about smokers’ health at all, and thus can bypass any consideration of scientific evidence. It’s a great way to introduce a show about the advertising business, as well as one of the great conflicts of that business: the opposition between reality, as represented by the medical evidence against smoking, and rhetoric, as represented by Draper’s inspirational flash. It’s also what makes Mad Men a work of historical fiction: as documented by Thomas Frank’s The Conquest of Cool: Business Culture, Counterculture, and the Rise of Hip Consumerism, there really was, during the 1950s and 60s, a conflict in the advertising industry between those who trusted in a “scientific” approach to advertising and those who, in Frank’s words, “deplored conformity, distrusted routine, and encouraged resistance to established power.” But that conflict also enveloped more than the advertising field: in those years many rebelled against a “scientism” that was thought confining—a rebellion that in many ways is with us still. Yet, though that rebellion may have been liberating in some senses, it may also have had certain measurable costs to the United States. Among those costs, it seems, might be height.

Height, or a person’s stature, is of course something most people regard as akin to the color of the sky or the fact of gravity: a baseline feature of the world incapable of change. In the past, the fate that leads one person to tower over others—or to look up to them in turn—might have been ascribed to God; today some might view height as the inescapable result of genetics. In one sense, this is true: as Burkhard Bilger says in the New Yorker story that inspired my writing here, the work of historians, demographers and dietitians has shown that with regard to height, “variations within a population are largely genetic.” But while height differences within a population are, in effect, a matter of genetic chance, that is not so when it comes to comparing different populations to each other.

“Height,” says Bilger, “is a kind of biological shorthand: a composite code for all the factors that make up a society’s well-being.” In other words, while you might be a certain height, and your neighbor down the street might be taller or shorter, both of you will tend to be taller or shorter than people from a different country—and the degree of shortness or tallness can be predicted by what sort of country you live in. That doesn’t mean that height is independent of genetics, to be sure: all human bodies are genetically fixed to grow at only three different stages in our lives—infancy, between the ages of six and eight, and as adolescents. But as Bilger notes, “take away any one of forty-five or fifty essential nutrients”—at any of these stages—“and the body stops growing.” (Like iodine, which can also have an effect on mental development.) What that means is that when large enough populations are examined, it can be seen whether a population as a whole is getting access to those nutrients—which in turn means it’s possible to get a sense of whether a given society is distributing resources widely … or not.

One story Bilger tells, about Guatemala’s two main ethnic groups, illustrates the point: one of them, the Ladinos, who claim descent from the Spanish colonizers of Central America, were of average height. But the other group, the Maya, who are descended from indigenous people, “were so short that some scholars called them the pygmies of Central America: the men averaged only five feet two, the women four feet eight.” Since the two groups shared the same (small) country, with essentially the same climate and natural resources, researchers initially assumed that the difference between them was genetic. But that assumption turned out to be false: when anthropologist Barry Bogin measured Mayans who had emigrated to the United States, he found that they were “about as tall as Guatemalan Ladinos.” The difference between the two ethnicities was not genetic: “The Ladinos,” Bilger writes, “who controlled the government, had systematically forced the Maya into poverty”—and poverty, because it can limit access to the nutrients essential during growth spurts, is systemically related to height.

It’s in that sense that height can literally be a measurement of the degree of freedom a given society enjoys: historically, Guatemala has been a hugely stratified country, with a small number of landowners presiding over a great number of peasants. (Throughout the twentieth century, in fact, the political class was engaged in a symbiotic relationship with the United Fruit Company, an American company that possessed large-scale banana plantations in the country—hence the term “banana republic.”) Short people are, for the most part, oppressed people; tall people, conversely, are mostly free people: it’s not an accident that as citizens of one of the freest countries in the world, the Netherlands, Dutch people are also the tallest.

Americans, at one time, were the tallest people in the world: in the eighteenth century, Bilger reports, Americans were “a full three inches taller than the average European.” Even so late as the First World War, he also says, “the average American soldier was still two inches taller than the average German.” Yet, a little more than a generation later, that relation began to change: “sometime around 1955 the situation began to reverse.” Since then all Europeans have been growing, as have Asians: today “even the Japanese—once the shortest industrialized people on earth—have nearly caught up with us, and Northern Europeans are three inches taller and rising.” Meanwhile, American men are “less than an inch taller than the average soldier during the Revolutionary War.” And that difference, it seems, is not due to the obvious source: immigration.

The people that work in this area are obviously aware that, because the United States is a nation of immigrants, that might skew the height data: clearly, if someone grows up in, say, Guatemala and then moves to the United States, that could conceivably warp the results. But the researchers Bilger consulted have considered the point: one only includes native-born, English-speaking Americans in his studies, for example, while another says that, because of the changes to immigration law during the twentieth century, the United States now takes in far too few immigrants to bias the figures. But if not immigration, then what?

For my own part, I find the coincidence of 1955 too much to ignore: it was around the mid-1950s that Americans began to question a view of the sciences that had grown up a few generations earlier. In 1898, for example, the American philosopher John Dewey could reject “the idea of a dualism between the cosmic and the ethical,” and suggest that “the spiritual life … [gets] its surest and most ample guarantees when it is learned that the laws and conditions of righteousness are implicated in the working processes of the universe.” Even so late as 1941, the intellectual magazine The New Republic could publish an obituary of the famed novelist James Joyce—author of what many people feel is the finest novel in the history of the English language, Ulysses—that proclaimed Joyce “the great research scientist of letters, handling words with the same freedom and originality that Einstein handles mathematical symbols.” “Literature as pure art,” the magazine then said, “approaches the nature of pure science”—suggesting, as Dewey said, that reality and its study did not need to be opposed to some other force, whether that be considered to be religion and morality or art and beauty. But just a few years later, elite opinion began to change.

In 1949, for instance, the novelist James Baldwin would insist, against the idea of The New Republic’s obituary, that “literature and sociology are not the same,” while a few years later, in 1958, the philosopher and political scientist Leo Strauss would urge that the “indispensable condition of ‘scientific’ analysis is then moral obtuseness”—an obtuseness that, Strauss would go on to say, “is not identical with depravity, but […] is bound to strengthen the forces of depravity.” “By the middle of the 1950s,” as Thomas Frank says, “talk of conformity, of consumerism, and of the banality of mass-produced culture were routine elements of middle-class American life”—so that “the failings of capitalism were not so much exploitation and deprivation as they were materialism, wastefulness, and soul-deadening conformity”: a sense that Frank argues provided fuel for the cultural fires of the 1960s that were to come, and that the television show Mad Men documents. In other words, during the 1950s and afterwards, Americans abandoned a scientific outlook; meanwhile, they have also grown shorter—at least relative to the rest of the world. Correlation, as any scientist will tell you, does not imply causation, but it does imply that Lucky Strike might not be unique any more—though as any ad man would tell you, “America: It’s Toast!” is not a winning slogan.