Stack and Tilt

His experience … had disabused him of any hope that the government would intercede to prevent rich corporations from doing bad things to poor people.
The Big Short: Inside the Doomsday Machine 

 

Spend enough time with it, and eventually golf can seem to intersect with everything. “It looks like,” for instance, the character Vinny remarks at a pivotal moment in the new film, The Big Short, “someone hit a piñata of white guys who suck at golf.” In Michael Lewis’ original book, The Big Short: Inside the Doomsday Machine, Steve Eisman—renamed “Mark Baum” and played by Steve Carell for the film—is Vinny’s nominal boss, but Eisman is also dependent on Vinny because, Eisman says, “‘I can’t add … I think in stories. I need help with numbers.’” The story told through these two characters, among others, is a story about numbers: how five trillion dollars went up in smoke in 2008. But there’s also a hidden story that The Big Short only implicitly tells: a story about how objections to power became corralled into disciplines concerned with stories, not numbers—and how it turns out that numbers just might be more helpful than stories.

To put the point another way, the distinction between stories and numbers made by Steve Eisman just might be as ridiculous as the loans American banks made in the early twenty-first century—and even more toxic. It makes little difference, after all, to the man dying of thirst whether his lifesaving water got carried in a jug or a canteen, by camel or by truck. Yet because a great segment of the American population actually believes in the distinction between numbers and stories, it’s arguably become impossible to advance arguments that might have stood in the way of the financial meltdown. One of those arguments takes the form of a tale of two philosophers: one a Frenchman and the other a surfer.

The Frenchman is known today as “Condorcet,” although his full name was Marie Jean Antoine Nicolas de Caritat, the Marquis de Condorcet, who died mysteriously while in official custody in 1794. Elected to the National Assembly of France in 1791—two years after the beginning of the Revolution in 1789—he was a member of the moderate “Girondin” faction: those who believed in a republic, and not a monarchy, but were not willing to go so far as the Montagnard radicals. He is remembered today for his outspoken defense of the Enlightenment, and also for his early work in what has become known as the discipline of political science, particularly in regard to voting—work that, as it happens, also helps to explain how the 2008 disaster happened.

Michael Lewis’ book, like the film, explains what happened in the American housing market over the last few decades with admirable clarity (though without the film’s celebrity cameos). Essentially, Wall Street firms learned how to package many different home loans into enormous bundles—and then to turn those bundles into bonds. Each of those bundles was a kind of “tower,” sliced into layers known in Wall Street vernacular as “tranches.” At the top of each tower were the least-risky loans, the loans to people highly unlikely to default. This was the “triple-A” tranche; underneath that level was the “double-A” tranche, and so forth down to “the riskiest, triple-B tranche”: the tranche composed of loans made to people most likely to default on their debts.

Now, to skip past a great deal of exposition, the reason why the American housing market eventually blew up—and nearly took the world’s economy with it—was that, as Lewis writes, “the rating agencies”—i.e., those charged with saying which loans were “triple-A” and which “triple-B”—“presented with the pile of bonds backed by dubious loans, would pronounce 80 percent of the bonds in it triple-A.” Those bonds could then be re-packaged into new towers of debt, which in turn could be re-divided into triple-A and so forth, down to triple-B again. The housing market, in effect, became a shell game, in which it became impossible to tell which shell hid the marble of “good” loans among all the bad ones.
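
To make the mechanics concrete, here is a minimal sketch in Python, with made-up numbers rather than Lewis’s, of why a pooled tower could look so safe to a rating agency, and why that safety depended entirely on the loans failing independently of one another. The pool size, default probability, attachment point, and common-shock probability are all hypothetical illustrations.

```python
# A minimal sketch (with made-up numbers) of why pooling loans seemed safe.
# Assume 100 loans, each defaulting with probability 0.05; the "triple-A"
# slice only takes losses if more than 20 of the loans default at once.
import random

def senior_tranche_loss_rate(n_loans=100, p_default=0.05, attach=20,
                             common_shock=0.0, trials=20_000):
    """Fraction of trials in which defaults breach the senior tranche.

    `common_shock` crudely models correlation: with that probability,
    every loan in the pool defaults together (a housing-market crash).
    """
    breaches = 0
    for _ in range(trials):
        if random.random() < common_shock:           # everything sours at once
            defaults = n_loans
        else:                                        # independent defaults
            defaults = sum(random.random() < p_default for _ in range(n_loans))
        if defaults > attach:
            breaches += 1
    return breaches / trials

print(senior_tranche_loss_rate(common_shock=0.0))    # ~0.0  : looks "triple-A"
print(senior_tranche_loss_rate(common_shock=0.02))   # ~0.02 : the hidden risk
```

With no common shock the senior slice essentially never takes a loss, which is the arithmetic behind the triple-A stamp; add even a small chance that all the loans sour together, as in a nationwide housing bust, and the supposedly safe slice fails about that often.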

The question, however, is why investors ever agreed to stacking multiple loans into towers in the first place—and that is where our dead French nobleman, the marquis, comes in. For it was Condorcet who explained most penetratingly why it might be beneficial to make big piles out of smaller ones—though Condorcet’s purpose was a political, not an economic, one.

In his 1785 work, “Essay on the Application of Analysis to the Probability of Majority Decisions,” Condorcet laid out the same logic that would later lead Wall Street to the idea of bundling loans. In what would later be called “Condorcet’s jury theorem,” the aristocrat attempted to show—as James Hawthorne of the University of Oklahoma has written—

that if the number of voters … is sufficiently large and the average of their individual propensities to select the better of two policy proposals is a little above random chance … then the majority is extremely likely to select the better alternative.

Or, as the introduction to one edition of Condorcet’s work has put it, “the better the jurors’ judgment and/or the larger the majority, the more certain one can be that a decision is correct.” In other words—contrary to the adage about too many cooks—Condorcet is arguing that the more cooks, the better the resulting dish.

That, to put it mildly, is not what many people today think: as the proverb goes, our expectation is that the more people involved with a decision, the likelier it is to be a bad one. Condorcet himself agrees with that notion—so long, he argues, as the people involved are worse than random at finding the correct solution. In that case, he says, then yes: the more people involved, the less likely it is that the group will arrive at the right answer.

If the group, however, is composed of people who are better than random at finding the correct solution, then Condorcet tells us that the more people involved, the more likely it is that the group will find the correct answer. (It doesn’t guarantee that the group will, mind—it just makes it more likely.) The reason why is the same reason that Wall Street wanted to lump bad loans in among the good loans: the idea in both cases is the elementary one that, with enough good loans, the likelihood of a bad loan’s failure will be counteracted by the likelihood of a good loan’s success.
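
The theorem itself is easy to check numerically. The short Python sketch below, illustrative only and using an arbitrarily chosen competence of 0.51, computes the probability that a simple majority of n independent voters picks the better of two options when each voter does so with probability p; the point is simply that the figure climbs toward certainty as n grows, and sinks toward zero if p slips below one half.

```python
# A minimal sketch of Condorcet's jury theorem: the chance that a simple
# majority of n independent voters picks the better of two options, when
# each voter does so with probability p. (Odd n avoids ties.)
from math import comb

def majority_correct(n: int, p: float) -> float:
    """P(more than half of n voters are correct), each correct w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.51), 3))
# roughly 0.51, 0.53, 0.58, 0.74 -- creeping toward certainty as n grows;
# with p = 0.49 the same series instead sinks toward zero.
```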

Or to put it another way, it’s similar to the reason everyone hated the smart kid in high school who would “blow the curve” on the big test: that kid’s results could raise the overall average of the class, so that a score that otherwise would have been “average” becomes “below average.” By putting enough loans together, Wall Street argued, the banks could overcome the risk that any individual loan underlying the bonds might fail.

As Condorcet could have told them, however, the mathematics behind Wall Street’s argument only makes sense given the assumption that the individuals (whether loans or voters) within the groupings (towers of loans or groups of voters) were more likely to be “good”—that is, to vote correctly or, in the case of loans, not to default—than “bad.” If more “bad” loans (or dumb voters) are stacked together, Condorcet would say, then it becomes more, not less, likely that the whole tower will implode—and, to extend the metaphor into perhaps-impolite territory, “pancake” those caught beneath it. Michael Lewis’ book (and the movie made from it) describes how Wall Street did just that: by collecting lots of bad loans together, investment banks in New York and elsewhere effectively built the “Doomsday Machine” that Lewis alludes to in his book’s title.

Now, it’s right about here that many will break with Condorcet: how, they will triumphantly ask, can we tell whether we have a set of voters more likely to get the “right” answer than flipping a coin would? At first examination, that is a very difficult question to answer. Luckily enough, however, it’s just here that the late surfing philosopher Donald Davidson can ride Condorcet’s wave all the way to the beach: through what Davidson called “radical interpretation.”

Although the phrase “radical interpretation” might seem to refer to some kind of surf-influenced theory of reading, in fact Davidson’s idea is very down-to-earth. All it requires is for the reader to imagine herself confronting someone speaking a completely unknown language—one for which there are no translators or dictionaries to help. How to even begin? What, for instance, is even a word in the new language—and what just a grunt? Or, to put the point more concretely, using an example borrowed from one of Davidson’s teachers, W.V.O. Quine: if a person speaking the new language happens to point to a nearby rabbit and makes a sound, does that sound refer to the rabbit—or to the act of pointing?

In such a case, Davidson says, it might appear that we could get no traction on figuring out an interpretation. And yet, there is one thing (he writes in the essay, “Thought and Talk”) we know for an absolute certainty: “We can, however, take it as given that most beliefs are correct.” We might not know what it is that this new person is saying, in other words, but we can know that most of that person’s (or any person’s) beliefs about the world are true. Why? Well, because that person is alive.

The implication, in short, is that by surviving, any animal—and human beings are a kind of animal—has demonstrated that it is sufficiently “in tune” with its environment. An animal that, say, believed water was poison, and thus avoided water so strenuously as literally to die of dehydration, would obviously be “out of tune” with the environment. The fact that something is alive is therefore evidence that, while in some cases that animal might think something in error, overall it must be doing something “right.” The suggestion, then, is that most people, while they might believe some foolish things, for the most part believe true things: otherwise, they’d be dead.

That’s a truly significant, if usually ignored, fact about the state of being alive: as the British biologist Richard Dawkins once remarked, “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead.” What being alive means, after all, is that you (for I presume you are alive) are the final product of an incredibly vast series of improbabilities over oceans of time—and that is not nothing. Merely by surviving, voters might demonstrate that they fulfill the minimum condition prescribed by Condorcet: they have (or their DNA has) consistently done better than chance. What that leads to is the thought that, while packing loans together is obviously a risky proposition, packing voters is not: unlike collateralized debt obligations, expanding the vote is an investment that, nearly necessarily, must pay off.

The argument is a simple one: in the first place, we know that voters must be better than chance at making decisions about their own lives—at least as those lives have gone so far. Each voter is herself the product of thousands, if not millions, of years of other presumably better-than-chance decisions. We could therefore presume that such voters are more likely than not to be better-than-chance decision-makers—fulfilling Condorcet’s preconditions. And if those preconditions are met, then Condorcet tells us that adding more of them—that is, more voters—must result in a greater likelihood of the correct option being chosen, whether that is the right candidate in an election or the right alternative in a referendum. QED.

Or so, at least, it could be argued. What’s interesting to me about the present is that despite everything that has happened to the United States—and, for that matter, the rest of the world—since 2008 (and, perhaps, before that), no one has advanced, so far as I am aware, a similar sort of argument in public. Of course, that could be ascribed to the argument’s own weaknesses: the above is an armchair line of thought at best, and it’s at least debatable how competent individual voters are in reality. That is not even to approach the criticism that, while in some global sense the logic of the above might be sound—a fairly large assumption—there may be no way to tell in any concrete sense whether these particular voters have actually demonstrated competence or are merely themselves the result of chance. Any sufficiently hardheaded operator could easily advance counterarguments just as superficially appealing as the above argument might (or might not) be.

And yet.

Any counterargument, after all, would effectively be taking on the storytelling problem of The Big Short: the fact that, as Jake Coyle of the Associated Press pointed out in his review of the film, the “heroes” eventually realize that “to their horror and immense profit … they’ve effectively bet against America, and won.” To say that you don’t believe in ensuring that every American votes, in other words, is to doubt the wisdom of the American voter, which is also to say that you don’t believe in Americans.

Or, that you don’t believe in America.

So why doesn’t the Left, if there is such a thing, advance an argument like that? Well, to my mind it’s because, to a lot of the best, sharpest, and most articulate undergraduates today, the sciences are presented as “the enemy” of the political: as the cognitive psychologist Steven Pinker noted a few years ago in the pages of The New Republic, while everyone knows that to the Christian right “reviled is the application of scientific reasoning to religion,” it’s less well-remarked how “the intrusion of science into the humanities has been deeply resented.” In order to talk about politics in America these days, then, it’s first necessary to refuse to talk about science—even though, in many ways, an understanding of science is the most crucial kind of knowledge the United States (and the world) will need in the future.

The problem of America, then—perhaps the reason why The Big Short could even happen—is that there are too many American Steves, and not enough Vinnies. Both “the left” and “the right” have gone out of their way, for decades now, to denounce the supposed tyranny of the sciences—what’s sometimes called “scientism.” According to those who make this critique—and they belong to “both” sides of the political aisle—the nation is about to fall prey to a set of robotic would-be rulers who will stifle all “creativity,” a trait supposedly possessed only by artists and businessmen.

But then again, maybe it’s just that those people suck at more than golf.


High Anxiety

Now for our mountain sport …

Cymbeline 
Act III, Scene 3

High Hampton

Entrances to Wade Hampton Golf Club and High Hampton Inn and Country Club, North Carolina

Walt Whitman once said, as anyone who saw Bull Durham knows, that baseball would function to draw America together after the Civil War: the game, the poet said, would “repair our losses and be a blessing to us.” Many Americans have not lost this belief in the redemptive power of sports: as recently as 2011 John Boehner, then-Speaker of the House of Representatives, played a much-ballyhooed round of golf with President Barack Obama. Along with many other outlets, Golf Digest presented the event as presaging a new era of American unity: the “pair can’t possibly spend four hours keeping score, conceding putts, complimenting drives, filling divots, retrieving pond balls, foraging for Pro V1s and springing for Kit Kats off the snack cart,” argued the magazine, “without finding greater common ground.” Golf would thus be the antidote to what the late Columbia University history professor Richard Hofstadter, in 1964, called the “paranoid style”: the “heated exaggeration, suspiciousness, and conspiratorial fantasy” that Hofstadter found to be a common theme in American politics then and whose significance has seemingly only grown since.

The surface appeal of the “golf summit” seemed warranted because golf is, after all, a game that cannot really be played without trust in your opponents—it’s only on the assumption that everyone is honest that the game can even work. Yet as everyone knows by now, the summit failed: Boehner was, more or less, forced out of office this summer by those members of his party who, Boehner said, got “bent out of shape” over his golf with the president. While golf might, in other words, furnish a theoretical model for harmonious bipartisanship, in practice it has proved largely useless for preventing political polarization—a result anyone who has traveled Highway 107 in western North Carolina might have predicted. Up there, among the Great Smoky Mountains, there sits a counterexample to the dream of political consensus: the Wade Hampton Golf Club.

Admittedly, the idea that a single golf club could provide evidence strong enough to smack down the flights of fancy of a Columbia University professor like Hofstadter—and a Columbia University alumnus like Barack Obama—might appear a bit much: there’s a seeming disconnect between the weightiness of the subject matter and the evidential value of an individual golf club. What could the existence of the Wade Hampton Golf Club add to (or subtract from) Hofstadter’s assertions about the dominance of this “paranoid style,” examples of which range from the anti-Communist speeches of Senator Joseph McCarthy in the 1950s to the anti-Catholic, “nativist” movements of the 1830s and 1840s to the Populist denunciations of Wall Street during the 1890s? Yet the existence of the Wade Hampton Golf Club does tell against one of the pieces of evidence Hofstadter adduces for his argument—and in doing so it unravels, the way a kitten does a ball of string, not only the rest of Hofstadter’s case but also the fantasy of “bipartisanship.”

One of the examples of “paranoia” Hofstadter cited, in other words, was the belief held by “certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders’ conspiracy”—a view that, Hofstadter implied, was not much different from the contemporary belief that fluoridation was a Soviet plot. But a growing number of historians now believe that Hofstadter was wrong about those abolitionists: according to historian Leonard Richards of the University of Massachusetts, for instance, there’s a great deal of evidence for “the notion that a slaveholding oligarchy ran the country—and ran it for their own advantage” in the years prior to the Civil War. The point is more than an academic one: if it’s all just a matter of belief, then the idea of bipartisanship makes a certain kind of sense; all that matters is whether those we elect can “get along.” But if the abolitionists were right, then what matters is building the correct institutions, rather than electing the right people.

Again, that seems a rather larger question than the existence of a golf club in North Carolina is capable of answering. The existence of the Wade Hampton Golf Club, however, tends to reinforce Richards’ view on the strength of its name alone: the biography of the man the golf club was named for, Wade Hampton III, lends credence to Richards’ notion about the real existence of a slave-owning, oligarchical conspiracy, because Hampton was, after all, not only a Confederate general during the Civil War but also the possessor (according to the website of the Civil War Trust, which works to preserve Civil War battlefields) of “one of the largest collections of slaves in the South.” Hampton’s career, in other words, demonstrates just how entwined slaveowners were with the “cause” of the South—and if secession was largely the result of a slave-owning conspiracy during the winter of 1860, it becomes a great deal easier to think that said conspiracy did not spring up fully grown only then.

Descended from an obscenely wealthy family whose properties stretched from near Charleston in South Carolina’s Lowcountry to Millwood Plantation near the state capital of Columbia and all the way to the family’s summer resort of “High Hampton” in the Smokies—upon the site of which the golf club is now built—Wade Hampton was intimately involved with the Southern cause: not only was he one of the richest men in the South, but at the beginning of the war he organized and financed a military unit (“Hampton’s Legion”) that would, among other exploits, help win the first big battle of the war, near the stream of Bull Run. By the end of the war Hampton had become, along with Nathan Bedford Forrest, one of only two men without prior military experience to achieve the rank of lieutenant general. In that sense Hampton was exceptional—only eighteen other Confederate officers achieved that rank—but in another he was representative: as recent historical work shows, much of the Confederate army had direct links to slavery.

As historian Joseph T. Glatthaar has put the point in his General Lee’s Army: From Victory to Collapse, “more than one in every four volunteers” for the Confederate army in the first year of the war “lived with parents who were slaveholders”—as compared with the general population of the South, in which merely one in every twenty white persons owned slaves. If non-family members are included, or if economic connections, such as those to whom soldiers rented land or sold crops before the war, are counted, then “the vast majority of the volunteers of 1861 had a direct connection to slavery.” And if the slaveowners could create an army able to hold off the power of the United States for four years, it seems plausible they might have joined together prior to outright hostilities—which is to say that Hofstadter’s insinuations about the relative sanity of “certain” abolitionists (among them, Abraham Lincoln) no longer have the value they may once have had.

After all, historians have determined that the abolitionists were certainly right to suspect the motives of the slaveowners. “By itself,” wrote Roger Ransom of the University of California not long ago, “the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war … [in] the fall of 1860.” “On the eve of the war,” as another historian noted in the New York Times, “cotton comprised almost 60 percent of America’s exports,” and the slaves themselves, as yet another historian—quoted by Ta-Nehisi Coates in The Atlantic—has observed, were “the largest single financial asset in the entire U.S. economy, worth more than all manufacturing and railroads combined.” Collectively, American slaves were worth 3.5 billion dollars—at a time when the entire budget of the federal government was less than eighty million dollars. In other words, the value of American slaves could have covered the entire federal budget roughly forty-three times over.

Slaveowners thus had, in the words of a prosecutor, both means and motive to revolt against the American government; what’s really odd about the matter, however, is that Americans have ever questioned it. The slaveowners themselves fully admitted the point at the time: in South Carolina’s “Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina from the Federal Union,” for instance, the state openly lamented the election of a president “whose opinions and purposes are hostile to slavery.” And not just South Carolina: “Seven Southern states had seceded in 1861,” as the dean of American Civil War historians, James McPherson, has observed, “because they feared the incoming Lincoln administration’s designs on slavery.” When those states first met together at Montgomery, Alabama, in February of 1861, it took them only four days to promulgate what the New York Times called “a provisional constitution that explicitly recognized racial slavery”; in a March 1861 speech Alexander Stephens, by then the vice president of the Confederate States of America, argued that slavery was the “cornerstone” of the new government. Slavery was, as virtually anyone who has seriously studied the matter has concluded, the cause motivating the Southern armies.

If so—if, that is, the slaveowners created an army so powerful that it could hold off the power of the United States for four years, simply in order to protect their financial interests in slave-owning—then it seems plausible they might have joined together prior to the beginning of outright hostilities. Further, if there was a “conspiracy” to begin the Civil War, then the claim that there was one in the years and decades before the war becomes that much more believable. And if that possibility is tenable, then so is the claim by Richards and other historians—themselves merely following a notion that Abraham Lincoln himself endorsed in the 1850s—that the American constitution formed “a structural impediment to the full expression of Northern voting power” (as one reviewer has put it)—and that thus the answer to political problems is not “bipartisanship,” or in other words the election of friendlier politicians, but rather structural reform.

Such, at least, is the lesson anyone might draw from the career of Wade Hampton III, Confederate general—in light of which it’s suggestive that the Wade Hampton Golf Club is not some relic of the nineteenth century. Planning for the club began, according to the club’s website, in 1982; the golf course was not completed until 1987, when it was named “Best New Private Course” by Golf Digest. More suggestive still, however, is the fact that under the original bylaws, “in order to be a member of the club, you [had] to own property or a house bordering the club”—rules that resulted, as one golfer has noted, in a club of “120 charter and founding members, all from below the Mason-Dixon Line: seven from Augusta, Georgia and the remainder from Florida, Alabama, and North Carolina.” “Such folks,” as Bradley Klein once wrote in Golfweek, “would have learned in elementary school that Wade Hampton III, 1818-1902, who owned the land on which the club now sits, was a prominent Confederate general.” That is, anyone who became a member of Wade Hampton Golf Club probably knew a great deal about the history of Wade Hampton III—and was pretty much okay with that.

The existence of the Wade Hampton Golf Club does not, to be sure, demonstrate a continuity between the slaveowners of the Old South and the present membership of the club that bears Hampton’s name. It is, however, suggestive: if it is true, as many Civil War historians now say, that prior to 1860 there was a conspiracy to maintain an oligarchic form of government, then what are we to make of a present in which—as former Secretary of Labor Robert Reich recently observed—“the richest one-hundredth of one percent of Americans now hold over 11 percent of the nation’s total wealth,” a proportion greater than at any time since before 1929 and the start of the Great Depression? Surely, one can only surmise, the answer is easier to find than a mountain hideaway far above the Appalachian clouds, and requires no poetic vision to see.

The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago mayor Rahm Emanuel fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; according to former mayoral candidate Chuy Garcia, for example, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election he (Garcia) might have won.

Yet as long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the very belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so, of course, is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes toward change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe François-Marie Arouet, who chose the pen name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death registers as anything more than another nearly anonymous casualty.

Leibniz, the older of the two, is best known for inventing calculus (at the same time as the Englishman Isaac Newton): a mathematical tool immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes something to it—and also, as Wikipedia puts it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts also, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and, by lumping things together, make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical treatment.
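
Kepler’s maneuver is easy to reproduce. The sketch below (my own illustration, not anything from Kepler or Leibniz) computes the area of a regular polygon inscribed in a circle as the sum of its triangular slices; as the number of sides grows and each side shrinks toward the “infinitesimal,” the total glides smoothly toward the circle’s true area of πr².

```python
# A minimal sketch of the polygon idea the essay attributes to Kepler:
# slice an inscribed regular n-gon into n triangles and sum their areas.
from math import sin, pi

def inscribed_polygon_area(n_sides: int, radius: float = 1.0) -> float:
    """Area of a regular n-gon inscribed in a circle of the given radius."""
    return 0.5 * n_sides * radius**2 * sin(2 * pi / n_sides)

for n in (6, 96, 10_000, 1_000_000):
    print(n, inscribed_polygon_area(n))
# 6 -> 2.598..., 96 -> 3.1393..., 10_000 -> 3.1415924..., 1_000_000 -> 3.14159265...
# ever-shorter sides, ever-smaller corrections, converging on pi * r**2.
```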

Leibniz justified his work by appeal to what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (“nature does not make leaps”), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself was smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and, thus, refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened fissures five meters wide in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in Annals of the Former World: “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of scientists like J. Harlen Bretz and Luis Alvarez (about whom more in a moment), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” simply “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas now favor sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As Thomas Frank, the former editor of The Baffler, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether they realize it or not. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But, why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet in another sense—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather due to the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago could be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” According to a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” Governing put what might be the salient point this way: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing up government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point, one might observe, is the same as Leibniz’s: no matter how powerless each infinitesimal sheep might be, gathered together the flock can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.