Swings And Roundabouts

In a meeting with German business leaders in 1933, Hitler declared that “democracy” (i.e., actual parliamentary control) was fundamentally incompatible with a free-market capitalist economy …
“The Supermanagerial Reich.”
    Los Angeles Review of Books. 7 November 2016
Josiah Quincy III

Prior to the American Civil War the Northern politicians called “doughfaces,” mostly members of the Democratic Party, were essential to the preservation of slavery; for example, as historian Leonard Richards of the University of Massachusetts has pointed out, in 1820 “seventeen or eighteen doughfaces” made the admission of Missouri as a slave state possible, either by voting or abstaining from voting. Hence, by “voting with the South in crucial measures” throughout the pre-Civil War era, Richards says, doughfaces “turned a Southern minority position in the House, and the nation at large, into a majority political position.” Or as Josiah Quincy III—holder of the Boston trifecta as a former congressman, mayor, and president of Harvard College—called them in an 1856 speech: “‘Northern men with Southern principles’ … men who, for the sake of power or pay, were willing to do any work [slaveowners] would set them upon.” So much virtually anyone with a passing familiarity with antebellum American politics might know—but what few Americans today realize is that “doughfaces” are hardly extinct. One of them, for example, is Vox co-founder Matthew Yglesias—which is perhaps particularly surprising because in one sense Yglesias is Quincy’s intellectual heir.

That isn’t to say that there are not some major differences between the twenty-first century blogger and the nineteenth-century patrician. Unlike the Quincys, for instance—whose founding member arrived in 1628 and bought the site of the city of Boston five years later—the Yglesiases, like so many oppressed immigrant families, began their story with hard labor in New York City sweatshops like The New Yorker and The Nation.

Aside from that vast difference, however, both men are noted for their connections to highly privileged educational establishments: while Quincy matriculated at Phillips Academy—according to the school, “the oldest incorporated high school in the United States”—not only is Yglesias an alum of New York City’s exclusive private Dalton School—named by Business Insider as one of the “9 most elite prep schools in New York City” in 2015—he is also, like not only Josiah but so many other Quincys down through the centuries, a Harvard grad. And just as Quincy followed the family business into the law and politics—his father, in a historical moment that might be of some interest today, was John Adams’ co-counsel in defense of the armed authorities who shot both a black man and that man’s fellow laborers in the street in March of 1770 (the incident known to history as the Boston Massacre)—so too has Yglesias joined the family business: following in the footsteps of his writer grandfather, grandmother, and father, Yglesias began blogging as an undergraduate before getting work at The American Prospect, The Atlantic, and Slate after college. Yet despite all these differences—and similarities—in their biographies, the significant likeness between the WASP educator and the Jewish-Cuban blogger lies in their arguments, which have essentially the same terms.

Quincy’s speech to his fellow townspeople in 1856 denounced the hypocrisy of “the leaders of the democracy,” by which he meant the Northern Democratic Party politicians who loudly proclaimed their allegiance to the common people against the wealthy on the one hand—a theme of the Democratic Party since at least the time of Andrew Jackson’s demand, “Let the people rule”—but on the other helped to protect the South’s “peculiar institution.” Research into history, Quincy said, would demonstrate both that “when the slaveholders have any particularly odious and obnoxious work to do, they never fail to employ the leaders of the democracy of the Free States,” and how the slaveowners ceaselessly sought “and select[ed] among the leaders of the democracies of the great States” in order to find “the most corrupt and the least scrupulous.” (The implication is, in case it isn’t clear, that at least some of the doughfaces more or less sold their votes for material gain: “To some,” Quincy says, “promises were made, by way of opiates.”) These the slaveowners would recruit for their purposes—and thereby suborn the notion of economic justice in order to protect slavery. Their example demonstrated, Quincy effectively said, that concern for economic justice does not necessarily entail concern for racial justice.

Quincy’s argument ought to sound familiar: it could be viewed as one side of an argument that has lately clustered around the journalist Thomas Frank, founder of The Baffler and author of, most famously, 2004’s What’s the Matter With Kansas? How Conservatives Won the Heart of America. According to the late New York University journalism professor Ellen Willis, for example, Frank’s book constituted another episode in an argument she dated back to the 1980 presidential election (but that could be dated, as I hope to have demonstrated, to 1856). Frank’s side argued, Willis said, that “the left must concentrate its energies on promoting a populist economic program, and that the Democrats, if they want to win elections, must stop being identified as the party of ‘upper middle class’ feminists, gays, and secularists”—a theory she, like Josiah Quincy in 1856, vociferously argued against.

Frank has been a popular target: the grounds Willis advanced against Frank in 2006 mirrored Jonathan Liu’s objections to Frank’s 2008 book, The Wrecking Crew: How Conservatives Rule. In The Observer that year, Liu said that Frank “insist[ed] on the fundamental reality of economic interests over ‘cultural politics,’” which Liu said is “to throw minorities and gays and women under the bus.” In both 1856 and 2008, one side argues that social justice ought to precede economic justice—the other side the reverse.

As it happens, Yglesias comes down on the same side as Quincy, Willis, and Liu. In 2009, for example, Yglesias criticized “prosperous straight white intellectuals” for seeing “the past forty years” as “a period of relentless defeat for left-wing politics” due to “growing income inequality.” This is, in other words, an updated version of Quincy’s argument: the charge Quincy was leveling at Democratic politicians in 1856 was that by promoting economic populism, they were colluding in a system of racial oppression. If there were, as Leonard Richards says, more than three hundred Northern politicians (mostly Democrats) who fit Quincy’s description of doughfaces, while the slaveowners (as Quincy said in 1856) “writhe[d] in agonies of fear at the very mention of human rights as applicable to people of color,” then clearly the doughfaces were promoting economic equality at the expense of the enslaved. Given the oppression faced by those enslaved, Quincy’s obvious disgust is entirely understandable.

Yet, where Quincy’s argument from a century and a half ago made a certain sense given the realities of the United States in 1856, the logic of Yglesias’ 2009 position is not nearly so straightforward. Lighten up, Yglesias advised those “straight white intellectuals” then, with their petty concerns about economic inequality: “these past forty years have also seen enormous advances in the practical opportunities available to women, a major decline in the level of racism … [and] wildly more public and legal acceptance of gays and lesbians.” What you lose on the swings, you gain on the roundabouts.

The problem is that Yglesias’ advice doesn’t cohere: if Yglesias is right about the “enormous advances,” why would it make sense to continue to advocate advancing minority rights to the exclusion of addressing economic inequality? That lapse in logic, however, is not nearly as disturbing as Yglesias’ response to a forum conducted in the pages of the Boston Review in 2012, about a paper by Princeton professor Martin Gilens and his Northwestern colleague Benjamin I. Page. The nut of the two political scientists’ paper, entitled “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens,” was “that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.” In other words, Gilens and Page’s work demonstrated what radical critics have long asserted (without the fine-grained detail Gilens and Page provide): that the concerns of the “little guy” (and gal) have more or less zero impact on American policy. It’s a paper that ought to be deeply worrying to most Americans—yet to Yglesias, as we shall see, it’s just jake.

I say this because Yglesias begins his response to Gilens by saying that he “struggled to think of another essay that brings such excellent data and analytical power to bear on an issue while reaching such a fundamentally wrong-headed conclusion,” and ends by berating Gilens for “pining for a world in which policy outputs precisely reflect the views of the public”—which Yglesias thinks “is neither here nor there in terms of obtaining a better political system.” In other words, the input of voters is irrelevant: “policy responsiveness to public opinion,” Yglesias writes, “is not particularly important.” To Yglesias, it seems, the notion that—as some bumpkin once remarked—the United States is, or ought to be, a “government of the people, by the people, for the people,” is a ridiculous idea held by rubes and suckers.

The trouble with the “doughfaces,” Josiah Quincy said, was that they espoused (or feigned) a concern for economic equality while secretly colluding with a system of racial discrimination. By that definition, obviously, Yglesias cannot be considered a “doughface”: not only does he not live in the 1850s, he is advocating—as Quincy did—fighting discrimination before fighting inequality, whereas doughfaces backed the opposite plan. But in doing so, Quincy said, the doughfaces “rendered the Constitution of the United States a blank letter”: what was the point of voting in the Free States, after all, if that vote was merely traded away in Washington? Yet while the doughfaces might have, in effect, short-circuited the democratic will of the people in that fashion—without them, it’s possible that slavery might have ended by a simple majority vote—I don’t know of any historical “doughfaces” who actually attacked the very idea of democracy, as Yglesias does. In that sense, I suppose it could be said that Matthew Yglesias is not a doughface—the nineteenth-century friends of slavery, who went so far (as John Calhoun did) as to call slavery a “positive good,” never dared to call for the straight-up abolition of democratic government in favor of a state based on identity: even Jefferson Davis never called the act of voting itself into question. So, yep, Matthew Yglesias is not a doughface: he just believes that liberal democracy is obsolete and that racial and other identities are more important terms of analysis than any other. You know, like these guys.

How proud the (predominantly Jewish) Yglesiases, and Harvard, must be.


I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson. “Letter to William Charles Jarvis.” 28 September, 1820


When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C., by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in its 2009 Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy, of The New Yorker, recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take for instance the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hasn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like Lawrence Summers, the former presidential economic advisor and president of Harvard University, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, once depreciation is taken into account, net new investment in infrastructure is effectively nothing: new spending barely replaces what wears out.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35W bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although the measure was supported by 51 votes in favor to 49 against, it failed to pass—because, as Helderman wrote, according to the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, the United States would almost certainly be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority for ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, the Senate was originally designed to be an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, which involves George Washington saying to Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, still it captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified the point, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history, in other words, demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment not only in roads and canals and bridges but also in a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as that goes. But Nagel’s somewhat bland remarks do not do justice to the matter of how Adams’ proposals were defeated.

After the election of 1824, Adams’ party held a majority in the House of Representatives of the newly elected 19th Congress; the chaotic presidential contest itself, split among four major candidates, had been decided (as the Constitution provides) by the House. But while Adams’ faction had a majority in the House, it did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 senators (in a Senate of 48 senators, two to a state) while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires agreement between the two houses of Congress before a bill can become law, this meant that the Senate could—as it did—effectively veto any of the Adams party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly to be more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a matter seemingly remote from building bridges or canals. Hence, although senators like John Tyler of Virginia—who would later become president himself—opposed Adams-proposed “bills that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, they were as venal as any motive could be.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon” were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, in reality these seemingly resounding defenses of limited government were actually motivated by a less-than-savory interest: “statesmen of the Old South,” Hyman wrote, found that these doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect that such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates between a few dozen, usually elderly, white men nearly two centuries ago have been rendered impotent by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at a different point in American history, it was well understood that such “blue-ribbon” bodies—the Senate among them—were in fact simply a means of elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which then makes for “minority, not majority, rule.” In other words, even if many circumstances in American life have changed since 1825, the American government is still largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control it. And while slavery was abolished by the Civil War, a minority can still block things like infrastructure spending.

Hence, since infrastructure spending is—nearly by definition—for the improvement of every American, it’s difficult to see how making infrastructure spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason why we aren’t spending money on infrastructure, not because (as pundits like Cassidy might have it) “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. In fact, it is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

Baal

Just as ancient Greek and Roman propagandists insisted, the Carthaginians did kill their own infant children, burying them with sacrificed animals and ritual inscriptions in special cemeteries to give thanks for favours from the gods, according to a new study.
The Guardian, 21 January 2014.

 

Just after the last body fell, at three seconds after 9:40 on the morning of 14 December, the debate began: it was about, as it always is, whether Americans ought to follow sensible rules about guns—or whether guns ought to be easier to obtain than, say, the right to pull fish out of the nearby Housatonic River. A lot of words have been written about the Sandy Hook killings since the day that Adam Lanza—the last body to fall—killed 20 children and six adults at the elementary school he once attended, but few of them have examined the culpability of some of the very last people one might expect with regard to the killings: the denizens of the nation’s universities. After all, it’s difficult to accuse people who themselves are largely in favor of gun control of aiding and abetting the National Rifle Association—Pew Research reported, in 2011, that more than half of people with more than a college degree favored gun control. And yet, over the past several generations a doctrine has gained ground that, I think, has not only allowed academics to absolve themselves of engaging in debate on the subject of gun control, but has actively harmed the possibility of accomplishing it.

Having said that, of course, it is important to acknowledge that virtually all academics—even those who consider themselves “conservative” politically—are in favor of gun control: when, for example, Texas recently passed a law legalizing the carrying of guns on college campuses, Daniel S. Hamermesh, a University of Texas emeritus professor of economics (not exactly a discipline known for its radicalism), resigned his position, citing a fear for his own and his students’ safety. That’s not likely accidental, because not only do many academics oppose guns in their capacities as citizens, but academics have a special concern when it comes to guns: as Firmin DeBrabander, a professor of philosophy at the Maryland Institute College of Art, argued in the pages of Inside Higher Ed last year against laws similar to Texas’, “guns stand opposed” to the “pedagogical goals of the classroom” because while in the classroom “individuals learn to talk to people of different backgrounds and perspectives,” guns “announce, and transmit, suspicion and hostility.” If anyone has a particular interest in controlling arms, in other words, it’s academics, since their work is particularly designed to foster what DeBrabander calls “open and transformative exchange” that may air “ideas [that] are offensive.” So to think that academics may in fact be an obstacle towards achieving sensible policies regarding guns might appear ridiculous on the surface.

Yet there’s actually good reason to think that academic liberals bear some responsibility for the United States’ inability to regulate guns like every other industrialized—I nearly said, “civilized”—nation on earth. That’s because changing gun laws would require specific demands for action, and as political science professor Adolph Reed, Jr. of the University of Pennsylvania put the point not long ago in Harper’s, these days the “left has no particular place it wants to go.” That is, to many on campus and off, making specific demands of the political sphere is itself a kind of concession—or in other words, as journalist Thomas Frank remarked a few years ago about the Occupy Wall Street movement, today’s academic left teaches that “demands [are] a fetish object of literal-minded media types who stupidly crave hierarchy and chains of command.” Demanding changes to gun laws is, after all, a specific demand, and to make specific demands is, from this sophisticated perspective, a kind of “sell out.”

Still, how did the idea of making specific demands become a derided form of politics? After all, the labor movement (the eight-hour day), the suffragette movement (women’s right to vote), or the civil rights movement (an end to Jim Crow) all made specific demands. How then has American politics arrived at the diffuse and essentially inarticulable argument of the Occupy movement—a movement within which, Elizabeth Jacobs claimed in a report for the Brookings Institution while the camp in Zuccotti Park still existed, “the lack of demands is a point of pride”? I’d suggest that one possible way the trick was turned was through a 1967 article written by one Robert Bellah, of Harvard: an article that described American politics, and its political system, as a “civil religion.” By describing American politics in religious rather than secular terms, Bellah opened the way towards what some have termed the “non-politics” of Occupy and other social movements—and, incidentally, towards allowing children like Adam Lanza’s victims to die.

In “Civil Religion in America,” Bellah—who received his bachelor’s from Harvard in 1950, and then taught at Harvard until moving to the University of California at Berkeley in 1967, where he continued until the end of his illustrious career—argued that “few have realized that there actually exists alongside of and rather clearly differentiated from the churches an elaborate and well-institutionalized civil religion in America.” This “national cult,” as Bellah terms it, has its own holidays: Thanksgiving Day, Bellah says, “serves to integrate the family into the civil religion,” while “Memorial Day has acted to integrate the local community into the national cult.” Bellah also remarks that the “public school system serves as a particularly important context for the cultic celebration of the civil rituals” (a remark that, incidentally, perhaps has played no little role in the attacks on public education over the past several decades). Bellah argues, too, that various speeches by American presidents like Abraham Lincoln and John F. Kennedy are examples of this “civil religion” in action: Bellah spends particular time with Lincoln’s Gettysburg Address, which, he notes, the poet Robert Lowell observed is filled with Christian imagery, and constitutes “a symbolic and sacramental act.” In saying so, Bellah is merely following a longstanding tradition regarding both Lincoln and the Gettysburg Address—a tradition, however, that does not have the political valence that Bellah, or his literal spiritual followers, might think it does.

“Some think, to this day,” wrote Garry Wills of Northwestern University in his magisterial Lincoln at Gettysburg: The Words that Remade America, “that Lincoln did not really have arguments for union, just a kind of mystical attachment to it.” It’s a tradition that Wills says “was the charge of Southerners” against Lincoln at the time: after the war, Wills notes, Alexander Stephens—the only vice president the Confederate States ever had—argued that the “Union, with him [Lincoln], in sentiment rose to the sublimity of a religious mysticism.” Still, it’s also true that others felt similarly: Wills points out that the poet Walt Whitman wrote that “the only thing like passion or infatuation” in Lincoln “was the passion for the Union of these states.” Nevertheless, it’s a dispute that might have fallen by the historical wayside if it weren’t for the work of literary critic Edmund Wilson, who called his essay on Lincoln (collected in a relatively famous book, Patriotic Gore: Studies in the Literature of the American Civil War) “The Union as Religious Mysticism.” That book, published in 1962, seems to have at least influenced Lowell—the two were, if not friends, at least part of the same New York City literary scene—and, through Lowell, it seems plausible that it influenced Bellah as well.

Even if there was no direct route from Wilson to Bellah, however, it seems indisputable that the notion—taken from Southerners—concerning the religious nature of Lincoln’s arguments for the American Union became widely transmitted through American culture. Richard Nixon’s speechwriter, William Safire—later a longtime columnist for the New York Times—was familiar with Wilson’s ideas: as Mark Neely observed in his The Fate of Liberty: Abraham Lincoln and Civil Liberties, on two occasions in Safire’s novel Freedom, “characters comment on the curiously ‘mystical’ nature of Lincoln’s attachment to the Union.” In 1964, the theologian Reinhold Niebuhr published an essay entitled “The Religion of Abraham Lincoln,” while in 1963 William J. Wolfe of the Episcopal Theological School of Cambridge, Massachusetts, claimed that “Lincoln is one of the greatest theologians in America,” in the sense “of seeing the hand of God intimately in the affairs of nations.” Sometime in the early 1960s and afterwards, in other words, the idea took root among some literary intellectuals that the United States was a religious society—not one based on an entirely secular philosophy.

When it comes to Lincoln, at any rate, there’s good reason to doubt this story: far from being a religious person, Lincoln has often been described as non-religious or even an atheist. His longtime friend Jesse Fell—so close to Lincoln that it was he who first suggested what became the famous Lincoln-Douglas debates—for instance once remarked that Lincoln “held opinions utterly at variance with what are usually taught in the church,” and Lincoln’s law partner William Herndon—who was an early fan of Charles Darwin’s—said that the president also was “a warm advocate of the new doctrine.” Being committed to the theory of evolution—if Lincoln was—doesn’t mean, of course, that the president was therefore anti-religious, but it does mean that the notion of Lincoln as religious mystic has some accounting to do: if he was one, it apparently was in no very simple way.

Still, as mentioned, the view of Lincoln as a kind of prophet did achieve at least some success within American letters—but, as Wills argues in Lincoln at Gettysburg, that success has in turn obscured what Lincoln really argued concerning the structure of American politics. As Wills remarks, for instance, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster, and few if any have considered Webster a mystic.” Webster’s views, in turn, descend from a line of American thought that goes back to the Revolution itself—though its most significant moment was at the Constitutional Convention of 1787.

Most especially, it goes back to one James Wilson: a Scottish emigrant, delegate to the Constitutional Convention of 1787, and later one of the first justices of the Supreme Court of the United States. If Lincoln got his notions of the Union from Webster, then Webster got his from Supreme Court Justice Joseph Story: as Wills notes, Theodore Parker, the Boston abolitionist minister, once remarked that “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” Story, for his part, got his notion from Wilson: as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later justice), Wilson was “a source for Joseph Story’s constitutional nationalism.” And Wilson’s arguments concerning the Constitution—which he had a strong hand in making—were hardly religious.

At the Constitutional Convention, one of the most difficult topics to confront the delegates was the issue of representation: one of the motivations for the convention itself, after all, was the fact that under the previous terms of government, the Articles of Confederation, each state, rather than each member of the Continental Congress, possessed a vote. Wilson had already, in 1768, attacked the problem of representation as being one of the foremost reasons for the Revolution itself—the American colonists were supposed, by British law, to be fully as much British subjects as any Londoner or Mancunian, yet had no representation in Parliament: “Is British freedom,” Wilson therefore asked in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament, “denominated from the soil, or from the people, of Britain?” That question was very much the predecessor of the question Wilson would ask at the convention: “For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land.

Wilson also made an argument that would later be echoed by Lincoln: he drew attention to the disparities of population between the several states. At the time of the convention, Pennsylvania—just as it is today—was a much more populous state than New Jersey was, a difference that made no difference under the Articles of Confederation, under which all states had the same number of votes: one. “Are not the citizens of Pennsylvania,” Wilson therefore asked the Convention, “equal to those of New Jersey? Does it require 150 of the former to balance 50 of the latter?” Lincoln’s echo came in October of 1854, at Peoria, in the speech that marked his political comeback, when, in order to illustrate the differences between free states and slave states, he noted that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

The point of attack for both men, in other words, was precisely the same: the matter of representation in terms of what would later be called a “one man, one vote” standard. It’s an argument that hardly appears “mystical” in nature: since the matter turns upon ratios of numbers to each other, it seems more apposite to describe the point of view adopted here as, if anything, “scientific”—if it weren’t for the fact that even the word “scientific” seems too dramatic for a matter that appears to be far more elemental.

Were Lincoln or Wilson alive today, then, it seems that the first point they might make about the gun control debate is that it is a matter about which the Congress is greatly at variance with public opinion: as Carl Bialik reported for FiveThirtyEight this past January, whenever Americans are polled “at least 70 percent of Americans [say] they favor background checks,” and furthermore that an October 2015 poll by CBS News and the New York Times “found that 92 percent of Americans—including 87 percent of Republicans—favor background checks for all gun buyers.” Yet, as virtually all Americans are aware, it has become essentially impossible to pass any sort of sensible legislation through Congress: a fact dramatized this spring by the “sit-down strike” staged by congressmen and congresswomen on the House floor. What Lincoln and Wilson might further say is that the trouble can’t be solved by such a “religious” approach: instead, what they presumably would recommend is changing a system that inadequately represents the people. That isn’t the answer that’s on offer from academics and others on the American left, however. Which is to say that, soon enough, there will be another Adam Lanza to bewail—another of the sacrifices, one presumes, that the American left demands Americans must make to what one can only call their god.

To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht, Oliver Cromwell had said two centuries before, was the one place Irish Catholics could go if they did not wish to be murdered by his New Model Army—the other choice being Hell.) But that sentence conjured the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seeds to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, then for levels of racism to remain precisely the same as in the past, the American public must not be getting less racist—merely better at hiding it. That then raises the question: if the level of racism still remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation on the grounds that Mississippi’s official delegation, an all-white slate of delegates, had only become the “official” delegation by suppressing the votes of the state’s 400,000 black people—which had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but instead of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in repeating, because as noted not-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president who made economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all, put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including an inability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump—who, along with Bernie Sanders, was for the most part alone in addressing these issues directly—began to respond to during the primary season over the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz for instance sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That reformation has troubling potential consequences, if they have not already become real. One of them has been adequately described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as surely ought to be self-evident (though to many people it apparently isn’t), a lot more will happen economically if 500 people split twenty million dollars than if one person has all of it.

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’) argument in favor of a broadly-based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus, in other words (and implicitly allowed by the Democratic Party’s leadership), enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism”—the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” In other words, Macedo asks: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in short, have begun to wonder publicly why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spieled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe for instance in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to lend the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun they were much the same thing: to be anti-majoritarian was, more or less, to be pro-slavery. (Historians like Paul Finkelman of the University of Tulsa have argued as much: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré were it not for the fact that it was made at the Constitutional Convention itself by none other than James Madison.) Which is to say that Stephen Macedo and Fannie Lou Hamer have chosen a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that such arguments are having very real effects.

What’s really significant, in short, about Macedo’s “insights” into majoritarianism is that, as the holder of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what is often (if apocryphally) attributed to Lenin as “useful idiots”: by dismantling the agreements that have underwritten the prosperity of a large proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to see how their opinion could matter in any other way—which is to say that arguments like Macedo’s are a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed-corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although they might not express themselves in terms that those with elite educations find palatable—in a fashion that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while organs of elite opinion like The New York Times may continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

Update: In the hours since I first posted this, I’ve come across two recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw what conclusions you will.

The Commanding Heights

The enemy increaseth every day; 
We, at the height, are ready to decline.
Julius Caesar. Act IV, Scene 3.

 

“It’s Toasted”: the two words that began the television series Mad Men. The show’s protagonist, Don Draper, comes up with them in a flash of inspiration during a meeting with the head of his firm’s chief client, the cigarette brand Lucky Strike: like every cigarette company, Lucky Strike has to come up with a new campaign in the wake of new government and press scrutiny of the health risks of smoking. Don’s solution is elegant: by simply describing the manufacturing process of Lucky Strikes—a process essentially the same as that of every other cigarette—the brand does not have to make any claim about smokers’ health at all, and thus can bypass any consideration of the scientific evidence. It’s a great way to introduce a show about the advertising business, as well as one of the great conflicts of that business: the opposition between reality, as represented by the mounting medical evidence, and rhetoric, as represented by Draper’s inspirational flash. It’s also what makes Mad Men a work of historical fiction: as Thomas Frank documented in The Conquest of Cool: Business Culture, Counterculture, and the Rise of Hip Consumerism, there really was, during the 1950s and 60s, a conflict in the advertising industry between those who trusted in a “scientific” approach to advertising and those who, in Frank’s words, “deplored conformity, distrusted routine, and encouraged resistance to established power.” But that conflict also enveloped more than the advertising field: in those years many rebelled against a “scientism” that was thought confining—a rebellion that in many ways is with us still. Yet, though that rebellion may have been liberating in some senses, it may also have had certain measurable costs to the United States. Among those costs, it seems, might be height.

Height, or a person’s stature, is of course something most people regard as akin to the color of the sky or the fact of gravity: a baseline feature of the world, incapable of change. In the past, the differences that lead one person to tower over others—or to look up at them in turn—might have been ascribed to God; today some might view height as the inescapable result of genetics. In one sense, this is true: as Burkhard Bilger says in the New Yorker story that inspired my writing here, the work of historians, demographers and dietitians has shown that, with regard to height, “variations within a population are largely genetic.” But while height differences within a population are, in effect, a matter of genetic chance, that is not so when it comes to comparing different populations to each other.

“Height,” says Bilger, “is a kind of biological shorthand: a composite code for all the factors that make up a society’s well-being.” In other words, while you might be a certain height, and your neighbor down the street might be taller or shorter, both of you will tend to be taller or shorter than people from a different country—and the degree of shortness or tallness can be predicted by what sort of country you live in. That doesn’t mean that height is independent of genetics, to be sure: the human body is genetically programmed to grow at only three stages of life—in infancy, between the ages of six and eight, and in adolescence. But as Bilger notes, “take away any one of forty-five or fifty essential nutrients”—at any of these stages—“and the body stops growing.” (Like iodine, which can also affect mental development.) What that means is that when large enough populations are examined, it becomes possible to see whether a population as a whole is getting access to those nutrients—which in turn means it’s possible to get a sense of whether a given society is distributing resources widely … or not.

One story Bilger tells, about Guatemala’s two main ethnic groups, illustrates the point: one of them, the Ladinos, who claim descent from the Spanish colonizers of Central America, were of average height. But the other group, the Maya, descended from the region’s indigenous people, “were so short that some scholars called them the pygmies of Central America: the men averaged only five feet two, the women four feet eight.” Since the two groups shared the same (small) country, with essentially the same climate and natural resources, researchers initially assumed that the difference between them was genetic. But that assumption turned out to be false: when the anthropologist Barry Bogin measured Mayans who had emigrated to the United States, he found that they were “about as tall as Guatemalan Ladinos.” The difference between the two ethnicities was not genetic: “The Ladinos,” Bilger writes, “who controlled the government, had systematically forced the Maya into poverty”—and poverty, because it can limit access to the nutrients essential during growth spurts, is systematically related to height.

It’s in that sense that height can be a literal measurement of the degree of freedom a given society enjoys: historically, Guatemala has been a hugely stratified country, with a small number of landowners presiding over a great number of peasants. (Throughout the twentieth century, in fact, the political class was engaged in a symbiotic relationship with the United Fruit Company, an American firm that possessed large-scale banana plantations in the country—hence the term “banana republic.”) Short people are, for the most part, oppressed people; tall people, conversely, are mostly free people: it’s no accident that the Dutch, citizens of one of the freest countries in the world, are also the tallest.

Americans, at one time, were the tallest people in the world: in the eighteenth century, Bilger reports, Americans were “a full three inches taller than the average European.” Even so late as the First World War, he also says, “the average American soldier was still two inches taller than the average German.” Yet, a little more than a generation later, that relation began to change: “sometime around 1955 the situation began to reverse.” Since then all Europeans have been growing, as have Asians: today “even the Japanese—once the shortest industrialized people on earth—have nearly caught up with us, and Northern Europeans are three inches taller and rising.” Meanwhile, American men are “less than an inch taller than the average soldier during the Revolutionary War.” And that difference, it seems, is not due to the obvious source: immigration.

The people who work in this area are obviously aware that, because the United States is a nation of immigrants, immigration might skew the height data: someone who grows up in, say, Guatemala and then moves to the United States could conceivably warp the results. But the researchers Bilger consulted have considered the point: one includes only native-born, English-speaking Americans in his studies, for example, while another notes that, because of changes to immigration law during the twentieth century, the United States now takes in far too few immigrants to bias the figures. But if not immigration, then what?

For my own part, I find the coincidence of 1955 too much to ignore: it was around the mid-1950s that Americans began to question a view of the sciences that had grown up a few generations earlier. In 1898, for example, the American philosopher John Dewey could reject “the idea of a dualism between the cosmic and the ethical,” and suggest that “the spiritual life … [gets] its surest and most ample guarantees when it is learned that the laws and conditions of righteousness are implicated in the working processes of the universe.” Even so late as 1941, the intellectual magazine The New Republic could publish an obituary of the famed novelist James Joyce—author of what many people consider the finest novel in the history of the English language, Ulysses—that proclaimed Joyce “the great research scientist of letters, handling words with the same freedom and originality that Einstein handles mathematical symbols.” “Literature as pure art,” the magazine went on, “approaches the nature of pure science”—suggesting, as Dewey had, that reality and its study did not need to be opposed to some other force, whether that force be considered religion and morality or art and beauty. But just a few years later, elite opinion began to change.

In 1949, for instance, the novelist James Baldwin would insist, against the idea of The New Republic’s obituary, that “literature and sociology are not the same,” while a few years later, in 1958, the philosopher and political scientist Leo Strauss would urge that the “indispensable condition of ‘scientific’ analysis is then moral obtuseness”—an obtuseness that, Strauss went on to say, “is not identical with depravity, but […] is bound to strengthen the forces of depravity.” “By the middle of the 1950s,” as Thomas Frank says, “talk of conformity, of consumerism, and of the banality of mass-produced culture were routine elements of middle-class American life”—so that “the failings of capitalism were not so much exploitation and deprivation as they were materialism, wastefulness, and soul-deadening conformity”: a sense that Frank argues provided fuel for the cultural fires of the 1960s that were to come, and that the television show Mad Men documents. In other words, during the 1950s and afterwards Americans abandoned a scientific outlook; in the decades since, Americans have also grown shorter, at least relative to the rest of the world. Correlation, as any scientist will tell you, does not imply causation, but it does suggest that Lucky Strike might not be unique any more—though as any ad man would tell you, “America: It’s Toast!” is not a winning slogan.

The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago mayor Rahm Emanuel fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; according to former mayoral candidate Chuy Garcia, for example, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election he (Garcia) might have won. Yet, so long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions, while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who could likely do more to prevent deaths like McDonald’s.

To say so, of course, is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes towards change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe François-Marie Arouet, who chose the pen name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death registers as anything more than another nearly anonymous casualty.

Leibniz, the older of the two, is best known for inventing (at the same time as the Englishman Isaac Newton) calculus: a mathematical tool immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes a debt to Leibniz’s innovation—and also, as Wikipedia has put it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts also, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take—by lumping together—what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
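To make the Kepler-style idea concrete, here is a minimal numerical sketch in Python (my own illustration, not anything Leibniz or Kepler wrote) of how a circle’s area emerges as the limit of inscribed polygons whose sides shrink toward the infinitesimal; the only formula assumed is the standard one for a regular n-gon inscribed in a circle of radius r, namely (n/2) · r² · sin(2π/n).

```python
import math

def inscribed_polygon_area(n_sides: int, radius: float = 1.0) -> float:
    """Area of a regular n-gon inscribed in a circle of the given radius:
    n identical triangles, each with area (1/2) * r^2 * sin(2*pi/n)."""
    return 0.5 * n_sides * radius**2 * math.sin(2 * math.pi / n_sides)

# As the sides multiply (and shrink), the polygon's area closes in on pi * r^2.
for n in (6, 60, 600, 6000, 60000):
    print(f"{n:>6}-sided polygon: area = {inscribed_polygon_area(n):.8f}")
print(f"circle (pi * r^2):       area = {math.pi:.8f}")
```

The point of the sketch is only that sides too small to measure, gathered together, yield something exact—the same gathering-up move the essay attributes to Leibniz.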

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (“nature does not make leaps”), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself was smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and, in doing so, refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened fissures five meters wide in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than a Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in that book: “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of people like Bretz and Alvarez (about whom more below), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas are in favor of sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As the former editor of The Baffler, Thomas Frank, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether they realize it or not. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But, why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet, in another—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather due to the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago can be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” According to a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing up government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point is, one might observe, the same as Leibniz’s: no matter how powerless an infinitesimal sheep might be, gathered together they can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.

Talk That Talk

Talk that talk.
“Boom Boom.”
    John Lee Hooker. 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, there are a lot of things that exist despite having little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it depends upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it has depended on patronage of one sort or another. Hence a potential intellectual weakness of basing a “left” on the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area that he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to the Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emphasis in original]; and also that “small size equals safe choices.” That is, of the collections he studied, he found that the smaller they were the more homogeneous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, just is the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever is a contest’s winner gets everything—and what that means is, as Taleb says about the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. What that means is that someone who might possess more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, that society would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their positions by sheer happenstance rather than merit.
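For readers who prefer to see the mechanism rather than take it on faith, here is a minimal simulation sketch in Python (the model, its parameters, and the function name are illustrative assumptions of mine, not anything from Taleb’s books) of the cumulative-advantage process described above: contestants of nearly identical quality compete for attention, each round’s attention goes to a contestant with probability proportional to quality times attention already received, and early luck compounds.

```python
import random

def cumulative_advantage(n_contestants=100, n_rounds=10_000, seed=0):
    """Toy winner-take-all model: near-equal 'quality', but attention breeds
    attention, so small early strokes of luck get magnified over time."""
    rng = random.Random(seed)
    quality = [rng.uniform(0.95, 1.05) for _ in range(n_contestants)]  # nearly identical merit
    attention = [1.0] * n_contestants                                  # everyone starts equal
    for _ in range(n_rounds):
        weights = [q * a for q, a in zip(quality, attention)]
        winner = rng.choices(range(n_contestants), weights=weights)[0]
        attention[winner] += 1.0                                       # the rich get richer
    top = max(range(n_contestants), key=attention.__getitem__)
    best = max(range(n_contestants), key=quality.__getitem__)
    print(f"top contestant's share of all attention: {attention[top] / sum(attention):.0%}")
    print(f"top contestant is also the highest-quality contestant: {top == best}")

cumulative_advantage()
```

Run it with a different seed and the “winner” changes even though the quality distribution does not—which is the point: the failures may well have the same qualities as the winner.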

A society, in other words, that was attempting to maximize the potential talent available to it—and it is hard to argue that this is not the obvious goal—should not be trying to bury potential talent, but instead should be trying to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually only emerge after years and years of training and experience, in “culture” such performers many times either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish kind of politics than culture, which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature proclaiming “power to the people,” the United States is as unequal a place today as it has been since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome, and thus the greater the potential role for chance. It’s worse when one considers that the court is an institution that gains new members only irregularly: the appointment of a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
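The statistical claim in the paragraph above, that fewer deciders means more variable outcomes, can be illustrated with a deliberately crude sketch in Python (the model and the function name are my own illustrative assumptions; nothing here pretends to model actual courts): treat each decider’s judgment as an independent noisy draw around some “true” value, take the panel’s ruling to be the average of those judgments, and compare how widely rulings scatter as the panel shrinks or grows.

```python
import random
import statistics

def ruling_spread(panel_size, n_trials=10_000, seed=0):
    """Simulate many panels: each member's judgment is an independent draw
    around a 'true' value of 0, and the panel's ruling is the average.
    Returns the standard deviation of rulings across all trials."""
    rng = random.Random(seed)
    rulings = [
        statistics.mean(rng.gauss(0, 1) for _ in range(panel_size))
        for _ in range(n_trials)
    ]
    return statistics.stdev(rulings)

# Smaller panels scatter more widely around the 'true' value -- roughly 1/sqrt(n).
for size in (9, 99, 999):
    print(f"panel of {size:>3}: spread of rulings ~ {ruling_spread(size):.3f}")
```

The spread falls off roughly as one over the square root of the panel size, which is all the argument above needs: nine deciders leave far more to chance than a large, aggregated electorate would.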

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, requires a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.