Forked

He had already heard that the Roman armies were hemmed in between the two passes at the Caudine Forks, and when his son’s courier asked for his advice he gave it as his opinion that the whole force ought to be at once allowed to depart uninjured. This advice was rejected and the courier was sent back to consult him again. He now advised that they should every one be put to death. On receiving these replies … his son’s first impression was that his father’s mental powers had become impaired through his physical weakness. … [But] he believed that by taking the course he first proposed, which he considered the best, he was establishing a durable peace and friendship with a most powerful people in treating them with such exceptional kindness; by adopting the second he was postponing war for many generations, for it would take that time for Rome to recover her strength painfully and slowly after the loss of two armies.
There was no third course.
Titus Livius. Ab Urbe Condita. Book IX.

 

“Of course, we want both,” wrote Lee C. Bollinger, the president of Columbia University, in 2012, about whether “diversity in post-secondary schools should be focused on family income rather than racial diversity.” But while many might wish to do both, is that possible? Can the American higher educational system serve two masters? According to Walter Benn Michaels of the University of Illinois at Chicago, Bollinger’s thought that American universities can serve both economic goals and racial justice has been the thought of “every academic” with whom he’s ever discussed the subject—but Michaels, for his part, wonders just how sincere that wish really is. American academia, he says, has spent “twenty years of fighting like a cornered raccoon on behalf of the one and completely ignoring the other”; how much longer, he wonders, before “‘we want both’ sounds hollow not only to the people who hear it but to the people who say it?” Yet what Michaels doesn’t say is just why, as pious as that wish is, it’s a wish that is necessarily doomed to go unfulfilled—something that is possible to see after meeting a fictional bank teller named Linda.

“Linda”—the late 1970s creation of two Israeli psychologists, Amos Tversky and Daniel Kahneman—may be the most famous fictional woman in the history of the social sciences, but she began life as a single humble paragraph:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Following that paragraph came a series of eight statements describing Linda—but as the biologist Stephen Jay Gould would later point out, “five are a blind, and only three make up the true experiment.” The “true experiment” wouldn’t reveal anything about Linda—but it would reveal a lot about those who met her. “Linda,” in other words, is like Nietzsche’s abyss: she stares back into you.

The three pointed statements of Kahneman and Tversky’s experiment are these: “Linda is active in the feminist movement; Linda is a bank teller; Linda is a bank teller and is active in the feminist movement.” The two psychologists would then ask their test subjects to judge which of the three statements was most likely. Initially, those subjects were lowly undergraduates, but as Kahneman and Tversky performed and re-performed the experiment, they gradually upgraded: first to graduate students with a strong background in statistics, and then eventually to faculty. Yet no matter how sophisticated the audience to which they showed this description, Kahneman and Tversky found that virtually everyone thought that the statement “Linda is a bank teller and active in the feminist movement” was more likely than the statement “Linda is a bank teller.” But as only a little thought reveals, that is impossible.

I’ll let the journalist Michael Lewis, who recently published The Undoing Project: A Friendship That Changed Our Minds, a book about the work of the pair of psychologists, explain the impossibility:

“Linda is a bank teller and is active in the feminist movement” could never be more probable than “Linda is a bank teller.” “Linda is a bank teller and is active in the feminist movement” was just a special case of “Linda is a bank teller.” “Linda is a bank teller” included “Linda is a bank teller and is active in the feminist movement” along with “Linda is a bank teller and likes to walk naked through Serbian forests” and all other bank-telling Lindas. One description was entirely contained by the other.

“Linda is a bank teller and is active in the feminist movement” simply cannot be more likely than “Linda is a bank teller.” As Louis Menand of Harvard observed about the “Linda problem” in The New Yorker in 2005, thinking that “bank teller and feminist” is more likely than the “bank teller” description “requires two things to be true … rather than one.” If the double description is true, the single one must be true as well; that’s why, as Lewis observed in an earlier article on the subject, it’s “logically impossible” to think otherwise. Kahneman and Tversky’s finding is curious enough on its own terms for what it tells us about human cognition, of course, because it exposes a mistake that virtually every human being who encounters the problem makes. But what makes it significant in the present context is that it is also the cognitive error Lee C. Bollinger makes in his opinion piece.
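To make the containment concrete, here is a minimal sketch of my own (not anything from Kahneman and Tversky’s paper) using made-up base rates; whatever numbers are chosen, the joint frequency can never exceed the single one:

```python
# Illustration of the conjunction rule: the people who satisfy two conditions
# are a subset of the people who satisfy either condition alone.
import random

random.seed(0)
population = 100_000

# Hypothetical, made-up base rates purely for illustration.
is_teller = [random.random() < 0.05 for _ in range(population)]
is_feminist = [random.random() < 0.30 for _ in range(population)]

p_teller = sum(is_teller) / population
p_both = sum(t and f for t, f in zip(is_teller, is_feminist)) / population

print(f"P(teller)              = {p_teller:.4f}")
print(f"P(teller and feminist) = {p_both:.4f}")
# The second number can never exceed the first: every "teller and feminist"
# is already counted among the "tellers."
```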

“The Linda problem,” as Michael Lewis observed in The Undoing Project, “resembled a Venn diagram of two circles, but with one of the circles wholly contained by the other.” One way to see the point, perhaps, is in relation to incarceration. As political scientist Marie Gottschalk of the University of Pennsylvania has observed, although the

African-American incarceration rate of about 2,300 per 100,000 people is clearly off the charts and a shocking figure … [f]ocusing so intently on these racial disparities often obscures the fact that the incarceration rates for other groups in the United States, including whites and Latinos, is also comparatively very high.

While the African-American rate of imprisonment is absurdly high, in other words, the “white incarceration rate in the United States is about 400 per 100,000,” which is at least twice the rate of “the most punitive countries in Western Europe.” What that means is that, while it is possible to do something about, say, African-American incarceration rates by lowering the overall incarceration rate, it can’t be done the other way around. “Even,” as Gottschalk says, “if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” Releasing more prisoners means fewer minority prisoners, but releasing only minority prisoners still leaves a lot of prisoners.

Which, after all, is precisely the point of the “Linda problem”: just as “bank teller” contains “bank teller and feminist” along with every other set of descriptors that could be added to “bank teller,” so too does “prisoner” contain every other set of descriptors that could be added to it. Hence, reducing the prison population will necessarily reduce the number of minorities in prison—but reducing the number of minority prisoners will not do (much) to reduce the number of prisoners. “Minority prisoners” is a circle contained within the circle of “prisoners”—so saying you’d like to reduce the number of minority prisoners is essentially to say that you don’t want to do anything about prisons.

Hence, when Hillary Clinton asked her audience during the recent presidential campaign “If we broke up the big banks tomorrow … would that end racism?” and “Would that end sexism?”—and then answered her own question by saying, “No,” what she was effectively saying was that she would do nothing about any of those things, racism and sexism included. (Which, given that this was the candidate who asserted that politicians ought to have “both a public and a private position,” is not out of the question.) Wanting “both,” or an alleviation of economic inequality and an end to discrimination—as Lee Bollinger and “every academic” Walter Benn Michaels has ever talked to say they want—is simply the most efficient way of not getting either. As Michaels says, “diversity and antidiscrimination have done and can do [emphasis added] nothing whatsoever to mitigate economic inequality.” The sooner Americans realize that Michaels isn’t kidding—that antidiscrimination and identity politics are not an alternative solution, but in fact no solution—and why he’s right, the sooner something can be done about America’s actual problems.

Assuming, of course, that’s something anyone really wants.


Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
Joseph Heller. Catch-22. (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore over (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famous sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component to winning baseball games, and winning baseball games is a key component to winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. Yet on the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games, but occasionally giving up blow-out losses—and yet as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. The 1964 Cardinals, finally, had two things going for them: first, Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him); and second, they played in the first year after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
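One way to see what run differential “smooths out” is Bill James’s well-known “Pythagorean expectation,” which converts runs scored and allowed into an expected winning percentage. The sketch below is mine, not a calculation by James or Forman; it plugs in the run totals quoted above with James’s original exponent of 2:

```python
def pythagorean_win_pct(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Expected winning percentage from runs scored and allowed (Bill James's formula)."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

teams = {
    "1984 Royals":    (673, 684),
    "1959 Dodgers":   (705, 670),
    "1961 Reds":      (710, 653),
    "1964 Cardinals": (715, 652),
    "1939 Yankees":   (967, 556),
}

for name, (scored, allowed) in teams.items():
    # A team that allows more runs than it scores projects to a losing record,
    # which is why the 1984 Royals' division title looks like luck.
    print(f"{name}: expected winning percentage {pythagorean_win_pct(scored, allowed):.3f}")
```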

That’s also, as it happens, just what the Law of Large Numbers does: first noted by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations, the more likely it is that all those bets will converge toward the expected result. In other words, one coin toss might be heads or might be tails—but the more times the coin is thrown, the closer the proportion of heads will tend to come to one-half.
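Here is a toy illustration of Bernoulli’s law (mine, not Mlodinow’s or Taleb’s): the proportion of heads settles toward one-half as the number of tosses grows, even though each individual toss remains unpredictable.

```python
import random

random.seed(42)
heads = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000}

for n in range(1, 100_001):
    heads += random.random() < 0.5   # one fair coin toss
    if n in checkpoints:
        print(f"after {n:>6} tosses, proportion of heads = {heads / n:.4f}")
```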

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but by winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” However weird that sort of result actually is, according to some it is just what makes the Electoral College worth keeping.

Hively was covering that story in 1996: his Discover article was about how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff tried to argue that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, just as the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that baseball’s playoff system really does identify the best team in baseball, and hence that a similar system ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is to eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (Cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/.). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to ensure that the winner really was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. In fact, it may even be argued that baseball, by increasingly relying on a playoff system instead of the regular season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969, the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams with a ratio of runs scored to runs allowed of less than 1.100 to 1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
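A back-of-the-envelope simulation, my sketch rather than Mlodinow’s calculation, makes the point. Assume, purely for illustration, that the better team wins any single game with probability 0.55, and compare how often it wins series of different lengths:

```python
import random

random.seed(7)

def better_team_wins(p_game: float, series_length: int) -> bool:
    """Play out a best-of-`series_length` series; True if the better team takes it.
    Playing every game and counting wins gives the same probability as stopping early."""
    wins = sum(random.random() < p_game for _ in range(series_length))
    return wins > series_length // 2

trials = 20_000
for length in (7, 23, 269):
    won = sum(better_team_wins(0.55, length) for _ in range(trials))
    print(f"best-of-{length:>3}: better team wins {won / trials:.1%} of simulated series")
```

The short series leaves a substantial share of outcomes to chance; only the absurdly long ones approach a reliable verdict, which is the sense in which the playoffs cannot be said to identify the best team.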

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution to a problem (the human thumb) and the actual solution (the panda’s thumb) implies the presence of “history,” this would then intimate that the Electoral College is either the result of a failure to understand the mathematics of chance as it applies to elections—or that the American system for electing presidents was not designed for the purpose it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these, some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reaffirmed one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, the marquis had, years before even meeting Jefferson (when the Virginian came to France to be the American minister), already become a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Although not all of the convention’s attendees, in short, may have been aware of the relationship between probability and elections, at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at the least, then, and most likely Madison as well, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
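Condorcet’s result can be checked with a few lines of arithmetic. The sketch below is mine, not Condorcet’s own computation, and the per-voter accuracy of 0.55 is an assumed figure; the point is only the trend, which is that the majority’s reliability climbs toward certainty as the electorate grows:

```python
from math import comb

def majority_correct(p: float, n_voters: int) -> float:
    """Probability that a strict majority of n_voters (odd) picks the correct option,
    assuming each voter is independently correct with probability p."""
    k_needed = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(k_needed, n_voters + 1))

# p = 0.55 is an assumed figure: each voter only modestly better than a coin flip.
for n in (1, 11, 101, 1_001):
    print(f"{n:>5} voters: majority correct with probability {majority_correct(0.55, n):.4f}")
```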

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, his proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically-influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote was how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23 games long, and possibly as many as 269—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.

Beams of Enlightenment

And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?
Matthew 7:3

 

“Do you know what Pied Piper’s product is?” the CEO of the company, Jack Barker, asks his CTO, Richard, during a scene in HBO’s series Silicon Valley—while two horses do, in the background, what Jack is (metaphorically) doing to Richard in the foreground. Jack is the experienced hand brought in to run the company Richard founded as a young programmer; on the other hand, Richard is so ingenuous that Jack has to explain to him the real point of everything they are doing: “The product isn’t the platform, and the product isn’t your algorithm either, and it’s not even the software. … Pied Piper’s product is its stock. Whatever makes the value of that stock go up, that is what we’re going to make.” With that, the television show effectively dramatizes the case many on the liberal left have been trying to make for decades: that the United States is in trouble because of something called “financialization”—or what Kevin Phillips (author of 1969’s The Emerging Republican Majority) has called, in one of the first uses of the term, “a prolonged split between the divergent real and financial economies.” Yet few on that side of the political aisle have considered how their own arguments about an entirely different subject are, more or less, the same as those powering “financialization”—how, in other words, the argument that has enhanced Wall Street at the expense of Main Street—Eugene Fama’s “efficient market hypothesis”—is precisely the same as the liberal left’s argument against the SAT.

That the United States has turned from an economy largely centered on manufacturing to one that centers on services, especially financial ones, can be measured by such data as the fact that the total fraction of America’s Gross Domestic Product consumed by the financial industry is now, according to economist Thomas Philippon of New York University, “around 9%,” while just more than a century ago it was under two percent. Most appear to agree that this is a bad thing: “Our economic illness has a name: financialization,” Time magazine columnist Rana Foroohar argues in her Makers and Takers: The Rise of Finance and the Fall of American Business, while Bruce Bartlett, who worked in both the Reagan and George H.W. Bush Administrations (which is to say that he is not exactly the stereotypical lefty), claimed in the New York Times in 2013 that “[f]inancialization is also an important factor in the growth of income inequality.” In a 2007 Bloomberg News article, Lawrence E. Mitchell—a professor of law at George Washington Law School—denounced how “stock market considerations” have come “to trump those that improve the actual workings of a business.” The consensus view appears to be that it is bad for a business to be, as Jack is on Silicon Valley, more concerned with its stock price than with what it actually does.

Still, if it is such a bad idea, why do companies do it? One possible answer might be found in the timing: the shift seems to have happened sometime after the 1960s. As John Bellamy Foster put it in a 2007 Monthly Review article entitled “The Financialization of Capitalism,” the “fundamental issue of a gravitational shift toward finance in capitalism as a whole … has been around since the late 1960s.” Undoubtedly, that turn was conditioned by numerous historical forces, but it’s also true that it was during the 1960s that the “efficient market hypothesis,” pioneered above all by the research of Eugene Fama of the University of Chicago, became the dominant intellectual force in the study of economics and in business schools—the incubators of the corporate leaders of today. And Fama’s argument was—and is—an intellectual cruise missile aimed at the very idea that the value of a company might be separate from its stock price.

As I have discussed previously (“Lions For Lambs”), Eugene Fama’s 1965 paper “The Behavior of Stock Market Prices” demonstrated that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers”—or in other words, that there was no rational way to beat the stock market. Also known as the “efficient market hypothesis,” the idea is largely that—as Fama’s intellectual comrade Burton Malkiel observed in his book A Random Walk Down Wall Street (which has gone through more than five editions since its first publication in 1973), “the evidence points mainly toward the efficiency of the market in adjusting so rapidly to new information that it is impossible to devise successful trading strategies on the basis of such news announcements.” Translated, that means it’s essentially impossible to do better than the market by paying close attention to what investors call a company’s “fundamental value.”
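What “a series of cumulated random numbers” looks like can be sketched in a few lines. This is my illustration, not Fama’s model, with an arbitrary starting price and shock size: each day’s move is independent of everything that came before, so nothing in the path’s history helps predict its next step.

```python
import random

random.seed(1965)
price = 100.0
path = [price]

for _ in range(250):                 # roughly one year of trading days
    shock = random.gauss(0.0, 1.0)   # today's "news," independent of everything before it
    price += shock
    path.append(price)

print(f"start {path[0]:.2f}, end {path[-1]:.2f}, "
      f"high {max(path):.2f}, low {min(path):.2f}")
```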

Yet if there is never a divergence between a company’s real worth and the price of its stock, then there is no way to measure a company’s real worth other than by its stock price. From Fama’s or Malkiel’s perspective, “stock market considerations” simply are “the actual workings of a business.” They argued against the very idea that there even could be such a distinction: that there could be something about a company that is not already reflected in its price.

To a lot of educated people on the liberal-left, of course, such an argument will affirm many of their prejudices: against the evils of usury, and the like. At the same time, however, many of them might be taken aback if it’s pointed out that Eugene Fama’s case against fundamental economic analysis is the same as the case many educators make, when it comes to college admissions, against the SAT. Take, for example, a 1993 argument made in The Atlantic by Stanley Fish, former chairman of the English Department at Duke University and dean of the humanities at the University of Illinois at Chicago.

In “Reverse Racism, or, How the Pot Got to Call the Kettle Black,” the Miltonist argued against noted conservative Dinesh D’Souza’s contention, in 1991’s Illiberal Education, that affirmative-action in college admissions tends “‘to depreciate the importance of merit criteria.’” The evidence that D’Souza used to advance that thesis is, Fish tells us, the “many examples of white or Asian students denied admission to colleges and universities even though their SAT scores were higher than the scores of some others—often African-Americans—who were admitted to the same institution.” But, Fish says, the SAT has been attacked as a means of college admissions for decades.

Fish cites David Owen’s None of the Above: Behind the Myth of Scholastic Aptitude as an example. There, Owen says that the

correlation between SAT scores and college grades … is lower than the correlation between height and weight; in other words, you would have a better chance of predicting a person’s height by looking at his weight than you would of predicting his freshman grades by looking only at his SAT scores.

As Fish intimates, most educational professionals these days would agree that the way to judge a student is not by SAT score but by GPA—grade point average.

To judge students by grade point average, however, is just what the SAT was designed to avoid: as Nicholas Lemann describes in copious detail in The Big Test: The Secret History of the American Meritocracy, the whole purpose of the SAT was to discover students whose talents couldn’t be discerned by any other method. The premise of the test’s designers, in short, was that students possessed, as Lemann says, “innate abilities”—and that the SAT could suss those abilities out. What the SAT was designed to do, then, was to find those students stuck in, say, some lethargic, claustrophobic small town whose public schools could not, perhaps, do enough for them intellectually and who stagnated as a result—and put those previously-unknown abilities to work in the service of the nation.

Now, as Lemann remarked in an interview with PBS’ Frontline,  James Conant (president of Harvard and chief proponent of the SAT at the time it became prominent in American life, in the early 1950s) “believed that you would look out across America and you would find just out in the middle of nowhere, springing up from the good American soil, these very intelligent, talented people”—if, that is, America adopted the SAT to do the “looking out.” The SAT would enable American universities to find students that grade point averages did not—a premise that, necessarily, entails believing that a student’s worth could be more than (and thus distinguishable from) her GPA. That’s what, after all, “aptitude” means: “potential ability,” not “proven ability.” That’s why Conant sometimes asked those constructing the test, “Are you sure this is a pure aptitude test, pure intelligence? That’s what I want to measure, because that is the way I think we can give poor boys the best chance and take away the advantage of rich boys.” The Educational Testing Service (the company that administered the SAT), in sum, believed that there could be something about a student that was not reflected in her grades.

To use an intellectual’s term, that means that the argument against the SAT is isomorphic with the “efficient market hypothesis.” In biology, two structures are isomorphic with each other if they share a form or structure: a human eye is isomorphic with an insect’s eye because they both take in visual information and transmit it to the brain, even though they have different origins. Hence, as biologist Stephen Jay Gould once remarked, two arguments are isomorphic if they are “structurally similar point for point, even though the subject matter differs.” Just as Eugene Fama argued that a company could not be valued other than by its stock price—which has had the effective consequence of meaning that a company’s product is now not whatever superficial business it is supposedly in, but its stock price—educational professionals have argued that the only way to measure a student’s value is to look at her grades.

Now, does that mean that the “financialization” of the United States’ economy is the fault of the liberal left, instead of the usual conservative suspects? Or, to put it more provocatively, is the rise of the 1% at the expense of the 99% the fault of affirmative action? The short answer, obviously, is that I don’t have the slightest idea. (But then, neither do you.) What it does mean, I think, is that at least some of what’s happened to the United States in the past several decades is due to patterns of thought common to both sides of the American political congregation: most perniciously, in the related notions that all value is always and everywhere visible, and that it takes no time and patience for value to manifest itself—and that at least some of the erosion of the contrary ideas is due to the efforts of those who meant well. Granted, it’s always hardest to admit wrongdoing when not only were your intentions pure, but even the immediate effects were good—but it’s also very much more powerful. The point, anyway, is that if you are trying to persuade, it’s probably best to avoid that other four-lettered word associated with horses.

 

 

Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been both one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had been long established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly-astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically-efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously-neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thus allowed new pieces of writing to be imported and new ideas to flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of people who probably would not have been studied—along with a lot of people who might not have done the studying—entered the curriculum, and might not have, had the change of mind Fish’s remark signified not become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion, then, that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in his other studies through the rest of the curriculum—an experience that he was not, as the experience of Averroes before him had demonstrated, alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth.” Or, in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commentator, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not to F. Scott Fitzgerald later—posed a problem to the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it is the one Aristotle himself said was the most essential to any rational thought whatever. That rule is usually known as the Law of Non-Contradiction, traditionally placed as the second of the three classical laws of thought in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—either A or not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commentators, Avicenna (Ibn Sina), put it in one of its most famous formulations—that rule goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not, aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24, against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the car’s engine was being tested by government monitors, and if so could reduce the pollutants that engine was emitting. Just more than six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me about this story, however, in relation to this talk about academics and monks, is a curious article put out by The New Yorker in October of 2015. In that piece, entitled “An Engineering Theory of the Volkswagen Scandal,” Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by sociologist Diane Vaughan about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger under colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s thrust is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one, a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely-acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara it is entirely plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and forced two government-sponsored enterprises (Fannie Mae and Freddie Mac) into federal conservatorship, and which is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes them modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck; the “Schrödinger’s cat” thought experiment, whereby the cat is neither dead nor alive until the box is opened; the fact that light is both a wave and a particle—designed to show how out-of-date the law of non-contradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky’s or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence, in the same article by Arthur Little that I previously cited. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remark about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of Little’s article, written thirty years before his own remark.

But then again, if I have no obligation to be right, what does it matter?

So Small A Number

How chance the King comes with so small a number?
The Tragedy of King Lear. Act II, Scene 4.

 

Who killed Michael Brown, in Ferguson, Missouri, in 2014? According to the legal record, it was police officer Darren Wilson who, in August of that year, fired twelve bullets at Brown during an altercation in Ferguson’s streets—the last being, said the coroner’s report, likely the fatal one. According to the protesters against the shooting (the protest that evolved into the #BlackLivesMatter movement), the real culprit was the racism of the city’s police department and civil administration; a charge that gained credibility later, when questionable emails written by, and sent to, city employees became public knowledge. In this account, the racism of Ferguson’s administration itself simply mirrored the racism that is endemic to the United States; Darren Wilson’s thirteenth bullet, in short, was racism. Yet, according to the work of Radley Balko of the Washington Post, among others, the issue that lay behind Brown’s death was not racism, per se, but rather a badly structured political architecture that fails to consider a basic principle of reality banally familiar to such bastions of sophisticated philosophic thought as Atlantic City casinos and insurance companies: the idea that, in the words of the New Yorker’s Malcolm Gladwell, “the safest and most efficient way to provide [protection]” is “to spread the costs and risks … over the biggest and most diverse group possible.” If that is so, then Brown’s killer was whoever caused Americans to forget that principle—and a case could be made that the culprit was a Scottish philosopher who lived more than two centuries ago: the sage of skepticism, David Hume.

Hume is well-known in philosophical circles for, among other contributions, describing something he called the “is-ought problem”: in his early work, A Treatise of Human Nature, Hume made the point that “the distinction of vice and virtue is not founded merely on the relations of objects”—or, that just because reality is a certain way, that does not mean it ought to be that way. The British philosopher G. E. Moore later called the act of mistaking is for ought the “naturalistic fallacy”: in 1903’s Principia Ethica, Moore asserted (as J. B. Schneewind of Johns Hopkins has paraphrased it) that “claims about morality cannot be derived from statements of facts.” It’s a claim, in other words, that serves to divide questions of morality, or values, from questions of science, or facts—and, as should be self-evident, the work of the humanities requires an intellectual claim of this form in order to exist. If morality, after all, were amenable to scientific analysis, there would be little reason for the humanities.

Yet there is widespread agreement among many intellectuals that the humanities are not subject to scientific analysis, specifically because only the humanities can tackle subjects of “value.” Thus, for instance, we find professor of literature Michael Bérubé, of Pennsylvania State University—an institution noted for its devotion to truth and transparency—scoffing “as if social justice were a matter of discovering the physical properties of the universe” when faced with doubters like Harvard biologist E. O. Wilson, who has had the temerity to suggest that the humanities could learn something from the sciences. And, Wilson and others aside, even some scientists subscribe to some version of this split: the biologist Stephen Jay Gould, for example, echoed Moore in his essay “Non-Overlapping Magisteria” by claiming that while the “net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory),” the “net of religion”—which I take in this instance as a proxy for the humanities generally—“extends over questions of moral meaning and value.” Other examples could be multiplied.

How this seemingly-arid intellectual argument affected Michael Brown can be directly explained, albeit not easily. Perhaps the simplest route is by reference to the Malcolm Gladwell article I have already cited: the 2006 piece entitled “The Risk Pool.” In a superficial sense, the text is a social history about the particulars of how social insurance and pensions became widespread in the United States following the Second World War, especially in the automobile industry. But in a more inclusive sense, “The Risk Pool” is about what could be considered a kind of scientific law—or, perhaps, a law of the universe—and how, in a very direct sense, that law affects social justice.

In the 1940s, Gladwell tells us, the leader of the United Auto Workers union was Walter Reuther—a man who felt that “risk ought to be broadly collectivized.” Reuther thought that providing health insurance and pensions ought to be a function of government: that way, the largest possible pool of laborers would be paying into a system that could provide for the largest possible pool of recipients. Reuther’s thought, that is, most determinedly centered on issues of “social justice”: the care of the infirm and the aged.

Reuther’s notions, however, could also be thought of in scientific terms: as an instantiation of what statisticians call the “law of large numbers.” According to the Caltech physicist Leonard Mlodinow, the law of large numbers describes “the way results reflect underlying probabilities when we make a large number of observations.” A more colorful way to think of it is the way the trader and New York University professor Nassim Taleb puts it in his book Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets: there, Taleb observes that, were Russian roulette a game in which the survivors gained the savings of the losers, then “if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).” In general, the law of large numbers is how casinos (or investment banks) make money legally (and bookies make it illegally): by taking enough bets (which thereby cancel each other out), the institution, whether it is located in a corner tavern or on Wall Street, can charge customers for the privilege of betting—and never take the risk of failure that would accrue were that institution to bet one side or the other. Less concretely, the same law is what allows us to place confidence in scientific results: because they can be repeated again and again, we can trust that they reflect something real.
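Since the law of large numbers is doing real work in this argument, it may help to see Taleb’s thought experiment run as a quick simulation. The sketch below is purely illustrative: the twenty-five-year horizon and the six-chamber revolver are the only details taken from the passage above, and the code is mine, not Taleb’s or Mlodinow’s.

```python
import random

def count_survivors(players=10_000, rounds=25, chambers=6, seed=42):
    """Each player pulls the trigger once a year for `rounds` years;
    return how many are still standing at the end."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(players):
        if all(rng.randrange(chambers) != 0 for _ in range(rounds)):
            survivors += 1
    return survivors

# Any single player's chance of surviving all twenty-five pulls is
# (5/6) ** 25, or roughly 1 percent -- a "very slim possibility."
print((5 / 6) ** 25)          # ~0.0105
# But with ten thousand players, the handful of survivors (and the very
# large cemetery) shows up reliably, run after run.
print(count_survivors())      # roughly 100 survivors out of 10,000
```

The casino’s ledger works the same way: any individual bet is unpredictable, but the sum of thousands of them is not.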

Reuther’s argument about social insurance and pensions more or less explicitly mirrors that law: like a casino, the idea of social insurance is that, by including enough people, there will be enough healthy contributors paying into the fund to balance out the sick people drawing from it. In the same fashion, a pension fund works by ensuring that there are enough productive workers paying into the pension to cancel out the aged people receiving from it. In both casinos and pension funds, in other words, the only means by which they can work is by having enough people included in them—if there are too few, the fund or casino takes the risk that the number of those drawing out will exceed the number paying in, at which point the operation fails. (In gambling, this is called “breaking the bank”; Ward Wilson pithily explains why that doesn’t happen very often in his learned tome, Gambling for Winners: Your Hard-Headed, No B.S., Guide to Gaming Opportunities With a Long-Term, Mathematical, Positive Expectation: “the casino has more money than you.”) Both casinos and insurance funds must have large numbers of participants in order to function: as numbers decrease, the risk of failure increases. Reuther therefore thought that the safest possible way to provide social protection for all Americans was to include all Americans.
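The same point about pool size and failure can be made numerically. The following sketch, with entirely made-up premiums, claim sizes, and claim probabilities, estimates how often a one-year risk pool comes up short as its membership shrinks; it is an illustration of Reuther’s logic, not an actuarial model.

```python
import random

def ruin_rate(members, p_claim=0.1, claim=10_000, premium=1_500,
              trials=2_000, seed=1):
    """Fraction of simulated years in which total claims exceed the
    premiums collected from `members` participants. All dollar figures
    and probabilities are invented for illustration."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        claimants = sum(1 for _ in range(members) if rng.random() < p_claim)
        if claimants * claim > members * premium:
            ruined += 1
    return ruined / trials

# The expected claim per member ($1,000) is comfortably below the
# premium ($1,500), yet small pools still fail often.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} members: ruin in {ruin_rate(n):.1%} of simulated years")
# Roughly a quarter of years for 10 members; a few percent for 100;
# essentially never for 10,000 -- the law of large numbers again.
```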

Yet, according to those following Moore’s concept of the “naturalistic fallacy,” Reuther’s argument would be considered an illicit intrusion of scientific ideas into the realm of politics, or “value.” Again, that might appear to be an abstruse argument between various schools of philosophers, or between varieties of intellectuals, scientific and “humanistic.” (It’s an argument that, in addition to assigning to the humanities the domain of “value,” also cedes to them categories like stylish writing—as if scientific arguments could only be expressed by equations rather than by quality of expression, and as if there were no scientists who are brilliant writers and no humanist scholars who are awful ones.) But while in one sense this argument takes place in very rarified air, in another it takes place on the streets where we live. Or, more specifically, the streets where Michael Brown was shot and killed.

The problem of Ferguson, Radley Balko’s work for the Washington Post tells us, is not one of “race,” but instead a problem of poor people—more exactly, of what happens when poor people are excluded from larger population pools, or in other words when the law of large numbers is excluded from discussions of public policy. Balko’s story draws attention to two inarguable facts: the first, that there “are 90 municipalities in St. Louis County”—Ferguson’s county—and nearly all of them “have their own police force, mayor, city manager and town council,” while 81 of those towns also have their own municipal court capable of sentencing lawbreakers to paying fines. By contrast, Balko draws attention to Missouri’s second-most-populous urban county: Kansas City’s Jackson County, which is “geographically larger than St. Louis County and has about two-thirds the population”—and yet “has just 19 municipalities, and just 15 municipal courts.” Comparing the two counties, that is, implies that St. Louis County is far more segmented than Jackson County: there are many more population pools in the one than in the other.

Knowing what we know about the law of large numbers, then, it might not be surprising that a number of the many municipalities of St. Louis County are worse off than the few municipalities of Jackson County: in St. Louis County some towns, Balko reports, “can derive 40 percent or more of their annual revenue from the petty fines and fees collected by their municipal courts”—rather than, say, property taxes. That, it seems likely, is because instead of many property owners paying taxes, there are a large number of renters paying rent to a small number of landlords, who in turn are wealthy enough to minimize their tax burden by employing tax lawyers and other maneuvers. Because these towns thus cannot depend on property tax revenue, they must instead depend on the fines and fees the courts can recoup from residents: an operation that, because of the chaos it necessarily implies for the lives of those citizens, usually results in more poverty. (It’s difficult to apply for a job, for example, if you are in jail for failure to pay a parking ticket.) Yet, if the law of large numbers is excluded a priori from political discussion—as some in the humanities insist it must be, whether out of disciplinary self-interest or some other reason—then residents of Ferguson cannot address the real causes of their misery, a fact that may explain just why those addressing the problems of Ferguson focus so much on “racism” rather than on the structural issues raised by Balko.

The trouble, however, with identifying “racism” as an explanation for Michael Brown’s death is that it leads to a set of “solutions” that do not address the underlying issue. In the November following Brown’s death, for example, Trymaine Lee of MSNBC reported that the federal Justice Department “held a two-day training with St. Louis area police on implicit racial bias and fair and impartial policing”—as if the problem of Ferguson were wholly attributable to the police department, or even to the town administration as a whole. Not long afterwards, the Department of Justice reported (according to Ray Sanchez of CNN) that, while Ferguson is 67% African-American, in the two years prior to Brown’s death “85% of people subject to vehicle stops by Ferguson police were African-American,” while “90% of those who received citations were black and 93% of people arrested were black”—data that seem to imply that, were those numbers only closer to 67%, there would be no problem in Ferguson.

Yet, even if the people arrested in Ferguson were proportionately black, that would have no effect on the reality that—as Mike Maciag of Governing reported shortly after Brown’s death—“court fine collections [accounted] for one-fifth of [Ferguson’s] total operating revenue” in the years leading up to the shooting. The problem of Ferguson, in other words, isn’t that its residents are black, and so the town’s problems cannot be solved by, say, firing all the white police officers and hiring black ones. Ferguson’s difficulty is not just that the town’s citizens are poor—it is that they are politically isolated.

There is, in sum, a fundamental reason that the doctrine of “separate but equal” is not merely bad for American schools, as the Supreme Court held in its 1954 decision in Brown v. Board of Education, the landmark case that struck down legal segregation in the nation’s public schools. That reason is the same at all scales: from the Large Hadron Collider at CERN exploring the elementary particles of the universe to the roulette tables of Las Vegas to the Social Security Administration, the greater the number of inputs, the greater the certainty, and hence the safety, of the results. Instead of affirming that law of the universe, however, the work of people like Michael Bérubé and others is devoted to questioning whether universal laws exist—in other words, to resisting the encroachment of the sciences on their turf. Perhaps that resistance is somehow helpful in some larger sense; perhaps it is so that, as is often claimed, the humanities enlarge our sense of what it means to be human, among their other sometimes-described benefits—I make no claims on that score.

What’s absurd, however, is the monopolistic claim sometimes retailed by Bérubé and others that the humanities have an exclusive right to political judgment: if Michael Brown’s death demonstrates anything, it ought (a word I use without apology) to show that, by promoting the idea of the humanities as distinct from the sciences, humanities departments have in fact collaborated (another word I use without apology) with people who have a distinct interest in promoting division and discord for their own ends. That doesn’t mean, of course, that anyone who has ever read a novel or seen a film helped to kill Michael Brown. But, just as it is so that institutions that cover up child abuse—like the Catholic Church or certain institutions of higher learning in Pennsylvania—bear a responsibility to their victims, so too is there a danger in thinking that the humanities have a monopoly on politics. Darren Wilson did have a thirteenth bullet, though it wasn’t racism. Who killed Michael Brown? Why, if you think that morality should be divided from facts … you did.

Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
Traditional; rec. by Charles Davis Tilman, 1889
Lexington, South Carolina

… science is but one.
Lucius Annaeus Seneca.

New rules for golf usually come into effect on the first of the year; this year, the big news is the ban on “anchored” putting: the practice of holding one end of the putter in place against the player’s body. Yet, as has been the case for nearly two decades, the real news from the game’s rule-makers this January is about a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as, for instance, Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued then that something needed to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis have demonstrated since, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical, since so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe that a regulatory body could have any advantage over a “market-maker” like King. Further, after the end of religious training it’s unlikely that many remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments might begin with something important—like, say, a prohibition against murder?—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps the most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the basis that, as the title of his essay says, each subject possesses a “non-overlapping magisteria”: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say, then, that the First Commandment could be thought of as a rendering of a logical rule seen as if through a glass darkly would be impermissible according to the prohibition laid down by Gould (among others): the prohibition against importing science into religion or vice versa. And yet some argue that such a prohibition is nonsense: Richard Dawkins, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, does not limit itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or, more specifically, that protects it.

Supposing Dawkins, and not Gould, to be correct then is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment could be thought of as a reflection of what’s known as the “law of non-contradiction,” usually called the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.
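For readers who prefer symbols to commandments, the law Aristotle is describing can be written in a single line of propositional logic; this is a standard modern formulation, not Aristotle’s own notation:

```latex
% Law of non-contradiction: for any proposition P,
% P and not-P cannot both hold.
\neg (P \land \neg P)
```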

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936, F. Scott Fitzgerald claimed, in Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric used by the First Commandment, or the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty in other words would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, it’s necessary to create more different kinds of golf. But the work of Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of a product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, what happens when the supply of a commodity is limited and not (potentially) infinite.

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth, but even aside from the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of the classical idea of the “law of demand.” Under that scenario, as money—or any commodity one likes—becomes rarer, it drives people to obtain more of it. But Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity will stop bidding the price up, and instead look for substitutes for that commodity. Thus, the overall “demand” must necessarily decline.
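Stiglitz’s point is, at bottom, arithmetic, and a back-of-the-envelope version makes it concrete. The $21.7 million figure comes from the passage above; the spending rates (10 percent for the single rich household, 90 percent for the 500 ordinary ones) are assumptions of mine for illustration, not numbers from The Price of Inequality.

```python
# Back-of-the-envelope version of Stiglitz's aggregate-demand argument.
income = 21_700_000          # Romney's reported 2010 income

# Scenario A: the whole sum sits with one household that spends only a
# small fraction of it (assumed rate, for illustration only).
spent_if_concentrated = income * 0.10

# Scenario B: the same sum is split among 500 households, each of
# which spends nearly everything it receives (again, an assumed rate).
households = 500
spent_if_dispersed = (income / households) * 0.90 * households

print(f"concentrated: ${spent_if_concentrated:,.0f} re-enters the economy")
print(f"dispersed:    ${spent_if_dispersed:,.0f} re-enters the economy")
# concentrated: $2,170,000 versus dispersed: $19,530,000 -- the same
# income, roughly nine times the spending; run in reverse, that gap is
# the decline in aggregate demand Stiglitz is pointing to.
```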

That, for instance, is what happened to cotton after 1860. That year, cotton grown in the southern United States was America’s leading export, and constituted (as Eugen R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain.” But as the war advanced—and the Northern blockade took effect—that share plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, other sources of supply rose to take its place: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports amounted to merely $7 million prior to the bombardment of Fort Sumter in 1861—but by the end of the war in 1865, Egyptian profits had reached $77 million, as Europeans sought different sources of supply than the blockaded South. This, despite the fact that Egyptian cotton was widely acknowledged to be inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South thus failed to understand that, while it did constitute the lion’s share of production prior to the war, it was not the sole place cotton could be grown—other models for production existed. In some cases, however—through natural or human-created means—an underlying commodity can have a bottleneck of some kind, creating a shortage. According to classical economic theory, in such a case demand for the commodity will grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether to an inferior substitute or, perhaps, to giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to have such a stranglehold on the market that it effectively kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation can have only one of two solutions: either the monopoly is broken, or people turn to a completely different substitute. To use an idiom from baseball, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s yearly survey of participation found 30 million players; in 2014, by contrast, the number was slightly under 25 million, according to a Golf Digest story by Mike Stachura. Mark King’s plan to win those players back, as we’ve seen, is to invent a new set of rules for them—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you could take up another set of rules.

Yet an examination of golf as it is, I’d say, would find that Rorty’s description of his ideal already is, more or less, a description of the sport’s current model in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Despite the fact that, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” it’s also true that private clubs constitute around 30 percent of all golf facilities, and, as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means to learn to play the game. To put it in Stiglitz’ terms, the PGA of America—and the private clubs who hire PGA professionals to staff their operations—essentially constitute a monopoly on instruction, or in other words on the basic education in how to accomplish the essential skill of the game: hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers play the game to enjoy. Unfortunately, it’s one that most golfers never achieve: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to re-achieve the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that the reason golf’s participation rate has declined is not an unfair set of rules, but rather that golf’s model bears more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant holds so much power that it effectively destroys the entire market. In situations like that, Stiglitz (and many other economists) argue that regulatory intervention is necessary—a realization that, perhaps, the United States Golf Association is also arriving at through its continuing decision not to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect, by continuing to allow some to monopolize access to the pleasure of the sport. By refusing to allow an “escape hatch” through which the older model could cling to life, the USGA is, consciously or not, speeding the day on which golf will become “all one thing or all the other,” as someone once said upon a vaguely similar occasion, invoking something like the First Commandment or the law of non-contradiction. What the stand of the USGA in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” just might be no different than, say, claiming your subprime mortgages are good, or that police crime figures accurately reflect crime. For the USGA then, if no one else, that old time religion is good enough: despite being against anchoring, it seems that the golf organization still believes in anchors.


The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago’s mayor, Rahm Emanuel, fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; according to former mayoral candidate Chuy Garcia, for example, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election he (Garcia) might have won. Yet, as long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions, while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so of course is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes towards change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe Francois-Marie Arouet, who chose the pen-name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death will register as anything more than another nearly-anonymous casualty.

Leibniz, the older of the two, is best known for inventing (at the same time as the Englishman Isaac Newton) calculus: a mathematical tool not only immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes something to Leibniz’s innovation—but also important here because it is “the mathematical study of change,” as Wikipedia has put it. Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts also, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take—by lumping together—what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
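Kepler’s trick, as described above, is easy to watch in action: carve the circle into ever-thinner triangles and add them up. The few lines below are a toy restatement of that idea, not Kepler’s or Leibniz’s own derivation.

```python
import math

def inscribed_polygon_area(n_sides, radius=1.0):
    """Area of a regular polygon inscribed in a circle: n_sides slim
    triangles, each with area (1/2) * r**2 * sin(2*pi / n_sides). As the
    sides shrink toward the "infinitesimal," the total approaches pi * r**2."""
    return 0.5 * n_sides * radius ** 2 * math.sin(2 * math.pi / n_sides)

for n in (6, 60, 600, 6_000, 60_000):
    print(f"{n:>6} sides: {inscribed_polygon_area(n):.6f}")
print(f"circle   : {math.pi:.6f}")   # the limit the polygons creep toward
```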

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (“nature does not make leaps”), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thus justified the smoothing operation of calculus on the grounds that reality itself was smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and thus rejected the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened fissures five meters wide in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in that book. “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of people like Bretz and Alvarez (of whom more in a moment), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” In fields from English literature to mathematics, the reigning ideas now favor sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continual process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As Thomas Frank, the former editor of The Baffler, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether or not they realize it. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But, why should that matter?

It matters because, under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in one sense, it obviously is: there will now be someone else in charge. Yet in another sense—as James Baldwin knew—it isn’t at all: I suspect that no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather due to the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago could be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” As a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, found, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first on Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.
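The survey does not give Miller’s formula, so the sketch below should not be read as his index. It is only a toy fragmentation measure (an inverse Herfindahl score over each government’s share of a metro area’s spending) meant to show the kind of thing a “power diffusion” number can capture: evenly split power scores high, concentrated power scores low.

```python
def effective_number_of_governments(spending_shares):
    """Inverse Herfindahl index: equals N when power is split evenly
    among N governments and falls toward 1 as one government dominates.
    A stand-in for illustration, NOT David Miller's actual index."""
    return 1.0 / sum(share ** 2 for share in spending_shares)

consolidated = [0.70, 0.20, 0.10]     # a county with a few strong governments
fragmented = [1 / 50] * 50            # fifty equal slivers of authority

print(effective_number_of_governments(consolidated))   # ~1.9
print(effective_number_of_governments(fragmented))     # 50.0
```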

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point, one might observe, is the same as Leibniz’s: no matter how powerless each infinitesimal sheep might be, gathered together the flock can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.