Paper Moon

Say, it’s only a paper moon
Sailing over a cardboard sea
But it wouldn’t be make-believe
If you believed in me
—“It’s Only a Paper Moon” (1933).


As all of us sublunaries know, we now live in a technological age in which high-level training is required of anyone who prefers not to deal methamphetamine out of their trailer—or at least, that’s the story we are fed. In my own case, the urge toward higher training has manifested in a return to school; hence my absence from this blog. Yet while even I recognize this imperative, the drive toward scientific excellence is not accepted everywhere: as longer-term readers may know, last year Michael Wilbon of ESPN wrote a screed (“Mission Impossible: African-Americans and Analytics”) not only against the importation of what is known as “analytics” into sports—where he joined arms with nearly every old white guy sportswriter out there—but, more curiously, essentially claimed that the statistical analysis of sports was racist. “Analytics” seem, Wilbon said, “to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” On one level Wilbon may be dismissed, because “analytics” is obviously friendlier to black people than many other forms of thought—it seems patently clear that something that pays more attention to actual production than to whether an athlete has a “good face” (as detailed in Moneyball) is going to be, on the whole, less racist—but he isn’t entirely mistaken. Even if Wilbon appears, moronically, to think that his “enemy” is just a bunch of statheads arguing about where to put your pitcher in the lineup, or whether two-point jump shots are valuable, he can be taken seriously once we recognize that his true opponent is none other than Sir Isaac Newton.

Although not many realize it, Isaac Newton was not simply the model of genius familiar to us today as the maker of scientific laws and victim of falling apples. (A story he may simply have made up in order to fend off annoying idiots—an urge with which, if you are reading this, you may be familiar.) Newton did, of course, first conjure the laws of motion that, on Boxing Day 1968, led William Anders, aboard Apollo 8, to reply “I think Isaac Newton is doing … the driving now” when a ground controller’s son asked who was in charge of the capsule—but despite the immensity of his scientific achievements, those were not the driving (ahem) force of his curiosity. Newton’s main interests, as a devout Christian, lay instead in ecclesiastical history—a topic that led him to perhaps the earliest piece of “analytics” ever written: an 87,000-word monstrosity the great physicist left behind, published posthumously in 1728.

Within the pages of this book is one of the earliest statistical studies ever written—or so, at least, Karl Pearson, often called “the founder of modern statistics,” realized some two centuries later. Pearson started the world’s first statistics department in 1911, at University College London; he either inaugurated or greatly expanded some half-dozen entire scientific disciplines, from meteorology to genetics. When Albert Einstein was a young man, the first book his study group took up was a work of Pearson’s, The Grammar of Science. In other words, while perhaps not a genius on the order of his predecessor Newton or his successor Einstein, Pearson was prepared to recognize a mind that was. More significantly, Pearson understood that, as he later wrote in the essay that furnishes the occasion for this one, “it is unusual for a great man even in old age to write absolutely idle things”: when someone immensely intelligent does something, it may not be nonsense no matter how much it might look like it.

That’s what led Pearson, in 1928, to publish the short essay of interest here, about what could appear to be the ravings of a religious madman but, as Pearson saw, weren’t: Newton’s 1728 The Chronology of Ancient Kingdoms Amended, to which is prefixed: A Short Chronicle from the First Memory of Things in Europe to the Conquest of Persia by Alexander the Great. As Pearson understood, it’s a work of apparent madness that conceals depths of genius. But it’s also, as Wilbon might recognize (were he informed enough to realize it), a work that is both a loaded gun pointed at African-Americans—and, perhaps, a tool of liberation.

The purpose of the section of the Chronology that concerned Pearson—there are others—was what Pearson called “a scientific study of chronology”: that is, Newton attempted to reconstruct the reigns of various kings, from contemporary France and England to the ancient rulers of “the Egyptians, Greeks and Latins” to the kings of Israel and Babylon. By consulting ancient histories, the English physicist compiled lists of the various reigns in kingdoms around the world—and what he found, Pearson tells us, is that “18 to 20 years is the general average period for a reign.” But why is this apparently recondite fact valuable to know? Because Newton is suggesting that this average can be compared to any other list of kings we find—and can thereby help determine whether the new list is likely to be spurious or not. The greater the difference between a new list of kingly reigns and Newton’s calculations from the old lists, in short, the more likely it is that the new list is simply made up, or fanciful.
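
To make the logic concrete, here is a minimal sketch of Newton’s test in modern dress—in Python, which Newton obviously lacked: compare a candidate list’s average reign length against his 18-to-20-year benchmark, measured in standard-error units. Every reign length below is invented for illustration; none of it is Newton’s actual data.

```python
# A crude screen in the spirit of Newton's chronology: how far does a
# king list's average reign fall from his ~18-20 year benchmark?
from statistics import mean, stdev

NEWTON_AVG = 19.0  # midpoint of Newton's 18-20 year benchmark

def reign_z_score(reigns):
    """Distance of the list's mean reign from Newton's benchmark,
    in standard-error units (a rough plausibility screen only)."""
    m = mean(reigns)
    se = stdev(reigns) / len(reigns) ** 0.5
    return (m - NEWTON_AVG) / se

attested = [22, 17, 15, 24, 19, 21, 16]   # hypothetical, verifiable dynasty
legendary = [55, 48, 61, 52, 59]          # hypothetical, mythic dynasty

print(f"attested:  z = {reign_z_score(attested):+.2f}")   # near zero: plausible
print(f"legendary: z = {reign_z_score(legendary):+.2f}")  # far from zero: suspect
```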

Newton did his study because he wanted to show that biblical history was not simply mythology, like that of the ancient Greeks: he wanted to show that the list of the kings of Israel exhibited all the same signs as the lists of kings we know really existed. Newton thereby sought to demonstrate the literal truth of the Bible. Now, that’s not something, as Pearson knew, that anyone today is likely to care much about—but what is significant about Newton’s work, as Pearson also knew, is that Newton here realized it’s possible to use numbers to demonstrate something about reality, which was not something that had ever really been done before in quite this way. Within Newton’s seeming absurdity, in sum, there lurked a powerful sense—the very same sense Bill James and others have been able to apply to baseball and other sports over the past generation and more, with the result that, for example, the Chicago Cubs (whose baseball operations were run by Theo Epstein, a Bill James acolyte) last year finally won, for the first time in more than a century, the final game of the season. In other words, during that nocturnal November moonshot on Chicago’s North Side last year, Sir Isaac Newton was driving.

With that example in mind, however, it might be difficult to see just why a technique, or method of thinking, that allows a historic underdog finally to triumph over its adversaries after eons of oppression could be a threat to African-Americans, as Michael Wilbon fears. After all, like the House of Israel, neither black people nor Cubs fans are unfamiliar with the travails of wandering for generations in the wilderness—and so a method that promises, and has delivered, a sure road to Jerusalem might seem to be attractive, not a source of anxiety. Yet, while in that sense Wilbon’s plea might seem obscure, even the oddest ravings of a great man can reward study.

Wilbon is right to fear statistical science, that is, for a reason I have been exploring recently: of all things, the Voting Rights Act of 1965. That might appear to be a reference even more obscure than the descendants of Hammurabi, but it is not: there is a statistical argument, in other words, to be derived from Sections Two and Five of that act. As legal scholars know, those two sections form the legal basis of what are known as “majority-minority districts”: as one scholar has described them, these are “districts where minorities comprise the majority or a sufficient percentage of a given district such that there is a greater likelihood that they can elect a candidate who may be racially or ethnically similar to them.” Since 1965, such districts have grown increasingly common, particularly since a 1986 U.S. Supreme Court decision (Thornburg v. Gingles, 478 U.S. 30 (1986)) that the Justice Department took to mandate their use in the fight against racism. The rise of such districts is essentially why, although there were fewer than five black congressmen in the United States House of Representatives prior to 1965, there are around forty today: a percentage of Congress (slightly less than 10%) not much less than the percentage of black people in the American population (slightly more than 10%). But what appears to be a triumph for black people may not be, so statistics may tell us, a triumph for all Americans.

That’s because, according to some scholars, the rise in the numbers of black congressional representatives may also have effectively required a decline in the numbers of Democrats in the House: as one such researcher remarked a few years ago, “the growth in the number of majority-minority districts has come at the direct electoral expense of … Democrats.” That might appear, to many, to be paradoxical: aren’t most African-Americans Democrats? So how can more black reps mean fewer Democratic representatives?

The answer, however, is provided, again perhaps strangely, by the very question itself: in short, by precisely the fact that most black people (upwards of 90%) are Democrats. Concentrating black voters into congressional districts, in other words, also has the effect of concentrating Democratic voters: districts that elect black congressmen and congresswomen tend to produce returns that are heavily Democratic. What that means, conversely, is that these are votes not being cast in other districts: as Steven Hill put the point for The Atlantic in 2013, drawing up majority-minority districts “had the effect of bleeding minority voters out of all the surrounding districts,” and hence worked to “pack Democratic voters into fewer districts.” In other words, majority-minority districts have indeed had the effect of electing more black people to Congress—at the likely cost of electing fewer Democrats. Or to put it another way: of electing more Republicans.
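
The arithmetic of “packing” is simple enough to run by hand; here is a toy sketch with invented numbers. The same electorate—250 Democratic and 250 Republican voters spread across five 100-voter districts—is drawn two different ways, and the packed map costs the Democrats two seats even though not a single vote changes.

```python
def seats_won(districts):
    """Number of districts in which party D outpolls party R."""
    return sum(1 for d_votes, r_votes in districts if d_votes > r_votes)

# Five 100-voter districts; 250 D voters and 250 R voters in both maps.
packed = [(90, 10), (40, 60), (40, 60), (40, 60), (40, 60)]  # one D stronghold
spread = [(54, 46), (54, 46), (54, 46), (44, 56), (44, 56)]  # D voters dispersed

print("packed map:", seats_won(packed), "of 5 seats")  # -> 1
print("spread map:", seats_won(spread), "of 5 seats")  # -> 3
```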

It’s certainly true that some of the foremost supporters of majority-minority districts have been Republicans: for example, the Reagan-era Justice Department mentioned above. Or Benjamin L. Ginsberg, who told the New York Times in 1992—when he was general counsel of the Republican National Committee—that such districts were “much fairer to Republicans, blacks and Hispanics.” But while all of that is so—and there is more to be said about majority-minority districts along these lines—these are only indirectly the reasons why Michael Wilbon is right to fear statistical thought.

That’s because what Michael Wilbon ought to be afraid of about statistical science, if he isn’t already, is what happens if somebody—with all of the foregoing about majority-minority districts in mind, as well as the fact that Democrats have historically been far more likely to look after the interests of working people—happened to start messing around with modern data the way Isaac Newton messed around with those lists of ancient kings. Newton, remember, compared old lists of ancient kings against more recent, verifiable lists: the comparison let him make assertions about which lists were more or less likely to be records of real kings. Statistical science has advanced since Newton’s time, though at heart the process is the same: the comparison of two or more data sets. Today, through more sophisticated techniques—some invented by Karl Pearson—statisticians can make inferences about, for example, whether the events recorded in one data set caused what happened in another. Using such techniques, someone today could take the lists of African-American congressmen and congresswomen and begin to compare them to other sets of data. And that is the real reason Michael Wilbon should be afraid of statistical thought.
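
The crudest version of such a comparison is the correlation coefficient Pearson himself formalized. A minimal sketch, assuming Python 3.10 or later for statistics.correlation; both series below are placeholders, not the real congressional counts or Piketty’s income-share figures—and correlation, it should go without saying, is not causation.

```python
# Compare two yearly series with Pearson's correlation coefficient.
# Both series here are invented placeholders for illustration only.
from statistics import correlation  # available in Python 3.10+

black_members = [5, 6, 9, 10, 12, 13, 15, 16, 16, 17]             # hypothetical counts
income_share = [0.5, 0.5, 0.6, 0.7, 0.9, 1.0, 1.2, 1.3, 1.3, 1.5]  # hypothetical % shares

r = correlation(black_members, income_share)
print(f"Pearson r = {r:.3f}")  # near +1 would mean the two series rise together
```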

Because what happens when, let’s say, somebody takes that data about black congressmen—and compares it to, I don’t know, Thomas Piketty’s mountains of data about economic inequality? Let’s say, specifically, the share of American income captured by the top 0.01% of all wage earners? Here is a graph of African-American members of Congress since 1965:

[Chart: African-American Members of Congress, 1967–2012]

And here is, from Piketty’s original data, the share of American income captured etc.:

[Chart: Share of U.S. Income, Top 0.01% (Capital Gains Excluded), 1947–1998]

You may wish to peruse the middle 1980s—when, perhaps coincidentally, right around the time of Thornburg v. Gingles, both lines take a huge jump. Leftists, of course, may complain that this juxtaposition could lead to blaming African-Americans for the economic woes suffered by so many Americans—a result that Wilbon should, rightly, fear. But on the other hand, it could also lead Americans to realize that their political system—in which the number of seats in Congress is so limited that “majority-minority districts” have, seemingly paradoxically, resulted in fewer Democrats overall—may not be much less anachronistic than the system that governed Babylon: a result that, as Michael Wilbon is apparently not anxious to tell you, might lead to something of benefit to everyone.

Either thought, however, can lead to only one conclusion: when it comes to the moonshot of American politics, maybe Isaac Newton should still—despite the protests of people like Michael Wilbon—be driving.

A Fable of a Snake


… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—Paradise Lost, Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.


“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, Eagleton is clearly the better read in theology. Yet although Dawkins the real person may be ignorant of the subtleties of the study of God, the rules of logic suggest that someone could be just as educated in theology as Eagleton—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult then to think of Aquinas, the “Angelic Doctor” as he is sometimes referred to by Catholics, as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by means of examining old books and telling people about what they found, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation is that this principle is usually thought of as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief—a version of what is often called “Occam’s Razor.” Isaac Newton invoked the axiom in the Principia Mathematica when the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs more or less Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposedly expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is of a different kind: instead of accepting the single-world hypothesis and merely disputing whether God belongs in that world’s description, Eagleton contends with the “principle of parsimony” itself—the contention that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befitting a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask this is because the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such thinking has not been limited to the eighteenth century, nor to the right-hand side of the English Channel.

Indeed, there have long been similar-minded people across the Channel—there’s reason, in fact, to think that the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is however not limited to European sources. The Marquis de Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed toward water—and hence dies of dehydration. Franklin’s concerns were taken up, a century and a half later, by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the Parliament of Great Britain passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, claiming—much as Norris had claimed—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral legislatures and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with the widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge depicting the sciences as opposed to something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.

The Weakness of Shepherds


Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
—Jeremiah 23:1


Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago mayor Rahm Emanuel fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; according to former mayoral candidate Chuy Garcia, for example, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election he (Garcia) might have won. Yet as long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the very belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way: American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so, of course, is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes toward change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe François-Marie Arouet, who chose the pen name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death registers as anything more than another nearly anonymous casualty.

Leibniz, the older of the two, is best known for inventing calculus (at the same time as the Englishman Isaac Newton): a mathematical tool immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes something to Leibniz’s innovation—and one that is, as Wikipedia puts it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts too, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and—by lumping it together—make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
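
Kepler’s maneuver is easy to reproduce numerically. A minimal sketch: compute the area of a regular polygon inscribed in a unit circle, and watch the result march smoothly toward π as the sides shrink toward the “infinitesimal.”

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n isosceles triangles, each with central angle 2*pi/n."""
    return 0.5 * n * r * r * math.sin(2 * math.pi / n)

# The polygon's area approaches pi * r**2 as the sides multiply.
for n in (6, 12, 96, 10_000):
    print(f"n = {n:>6}: area = {inscribed_polygon_area(n):.6f}")
print(f"      pi       = {math.pi:.6f}")
```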

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (‘nature does not make leaps’), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself was smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and thereby refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened fissures five meters wide in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in that book: “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of people like Bretz and Alvarez (of whom more below), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas are in favor of sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As the former editor of The Baffler, Thomas Frank, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether they realize it or not. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But, why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet, in another—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather of the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them depend on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago are related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” As a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, found, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point is, one might observe, the same as Leibniz’s: no matter how powerless an infinitesimal sheep might be, gathered together they can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.