Best Intentions

L’enfer est plein de bonnes volontés ou désirs
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Although, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of the conservative Supreme Court justice Antonin Scalia, who championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia argued the point throughout his career: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s point of view, among the most rarefied heights of literary theorists: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew stopping their ears, however, Bérubé has a simple method by which to avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss the thought of anti-intentionalism. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent from the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as the historian Garry Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people,” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus his reasons for fighting the war—ultimately derived from Wilson.

 

Not this James Wilson.

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

James Wilson (1742-1798)
This James Wilson.

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not from inanimate objects like the “soil,” it requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law of 1790 (among the first such legal lectures in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author write. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The sense of wonder this produces is then reinforced when, in the example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless asked us, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Caterpillars

All scholars, lawyers, courtiers, gentlemen,
They call false caterpillars and intend their death.
—2 Henry VI

 

When Company A, 27th Armored Infantry Battalion, U.S. 9th Armored Division, reached the forested hills overlooking the Rhine in the early afternoon of 7 March, 1945, and found the Ludendorff Bridge still, improbably, standing, its men may have been surprised to discover that they had not only found the last passage beyond Hitler’s Westwall into the heart of Germany—but had also stumbled into a controversy that is still, seventy years on, continuing. That controversy could be represented by an essay written some years ago by the Belgian political theorist Chantal Mouffe on the American philosopher Richard Rorty: the problem with Rorty’s work, Mouffe claimed, was that he believed that the “enemies of human happiness are greed, sloth, and hypocrisy, and no deep analysis is required to understand how they could be eliminated.” Such beliefs are capital charges in intellectual-land, where the stock-in-trade is precisely the kind of “deep analysis” that Rorty thought (at least according to Mouffe) unnecessary, so it’s little wonder that, for the most part, it’s Mouffe who’s had the better part of this argument—especially considering Rorty has been dead since 2007. Yet as the men of Company A might have told Mouffe—whose work is known, according to her Wikipedia article, for her “use of the work of Carl Schmitt” (a legal philosopher who joined the Nazi Party on 1 May 1933)—it’s actually Rorty’s work that explains just why they came to the German frontier; an account whose significance lies in the fact that it may be the ascendance of Mouffe’s view over Rorty’s that explains such things as, for instance, why no one was arrested after the financial crisis of 2007-08.

That may, of course, sound like something of a stretch: what could the squalid affairs that nearly led to the crash of the world financial system have in common with such recondite matters as the dark duels conducted at academic conferences—or a lucky accident in the fog of war? But the link in fact is precisely at the Ludendorff, sometimes called “the Bridge at Remagen”—a bridge that might not have been standing for Company A to find had the Nazi state really been the complicated ideological product described by people like Mouffe, instead of the product of “ruthless gangsters, distinguishable only by their facial hair” (as Rorty, following Vladimir Nabokov, once described Lenin, Trotsky, and Stalin). That’s because, according to (relatively) recent historical work that unfortunately has not yet deeply penetrated the English-speaking world, in March 1945 the German generals who had led the Blitzkrieg in 1940 and ’41—and then headed the defense of Hitler’s criminal empire—were far more concerned with the routing numbers of their bank accounts than the routes into Germany.

As “the ring closed around Germany in February, March, and April 1945”—wrote Ohio State historian Norman Goda in 2003—“and as thousands of troops were being shot for desertion,” certain high-ranking officers, who in some cases had been receiving extra “monthly payments” directly from the German treasury on orders of Hitler himself, and whose money had been “deposited into banks that were located in the immediate path of the enemy,” now “quickly arranged to have their deposits shifted to accounts in what they hoped would be in safer locales.” In other words, in the face of the Allied advance, Hitler’s generals—men like Heinz Guderian, who in 1943 was awarded “Deipenhof, an estate of 937 hectares (2,313 acres) worth RM [Reichsmark] 1.24 million” deep inside occupied Poland—were preoccupied with defending their money, not Germany.

Guderian—who led the tanks that broke the French lines at Sedan, the direct cause of the Fall of France in May 1940—was only one of many top-level military leaders who received secretive pay-offs even before the beginning of World War II: Walther von Brauchitsch, Guderian’s superior, had for example been getting—tax-free—double his salary since 1938, while Field Marshal Erhard Milch, who quit his prewar job of running Lufthansa to join the Luftwaffe, received a birthday “gift” from Hitler each year worth more than $100,000 U.S. These were just two of many high military officers to receive such six-figure “birthday gifts,” or other payments, which Goda writes were not only “secret and dependent on behavior”—that is, on not telling anyone about the payments and on submission to Hitler’s will—but also “simply too substantial to have been viewed seriously as legitimate.” All of these characteristics, as any federal prosecutor will tell you, are hallmarks of corruption.

Such corruption, of course, was not limited to the military: the Nazis were, according to historian Jonathan Petropoulos, “not only the most notorious murderers in history but also the greatest thieves.” Or as historian Richard J. Evans has noted, “Hitler’s rule [was] based not just on dictatorship, but also on plunder, theft and looting,” beginning with the “systematic confiscation of Jewish assets, beginning almost immediately on the Nazi seizure of power in 1933.” That looting expanded once the war began; at the end of September 1939, for instance, Evans reports, the German government “decreed a blanket confiscation of Polish property.” Dutch historian Gerard Aalders has estimated that Nazi rule stole “the equivalent of 14 billion guilders in today’s money in Jewish-owned assets alone” from the Netherlands. In addition, Hitler and other Nazi leaders, like Hermann Göring, were known for stealing priceless artworks in conquered nations (the subject of the recent film The Monuments Men). In the context of thievery on such a grand scale, it hardly appears a stretch to think they might pay off the military men who made it all possible. After all, the Nazis had been doing the same for civilian leaders virtually since the moment they took over the state apparatus in 1933.

Yet, there is one difference between the military leaders of the Third Reich and American leaders today—a difference perhaps revealed by their response when confronted after the war with the evidence of their plunder. At the “High Command Trial” at Nuremberg in the winter of 1947-’48, Walther von Brauchitsch and his colleague Franz Halder—who together led the Heer into France in 1940—denied that they had ever taken payments, even after being confronted with clear evidence of just that. Milch, for instance, claimed that his “birthday present” was compensation for the loss of his Lufthansa job. All the other generals did the same: Goda notes that even Guderian, who was well known for his Polish estate, “changed the dates and circumstances of the transfer in order to pretend that the estate was a legitimate retirement gift.” In short, they all denied it—which is interesting in light of the fact that, during the first Nuremberg trial, on 3 January 1946, a witness could casually admit to the murder of 90,000 people.

To admit receiving payments, in other words, was worse—to the generals—than admitting to setting Europe alight for essentially no reason. That it was so is revealed by the fact that the legal silence was matched by similar silences in postwar memoirs and the like, in none of which (except Guderian’s, which as mentioned fudged some details) did the generals admit to taking money directly from the national till. That silence implies, in the first place, a conscious knowledge that these payments were simply too large to be legitimate. And that, in turn, implies a consciousness not merely of guilt, but also of shame—a concept that is simply incoherent without an understanding of what the act underlying the payments actually is. The silence, that is, implies that the German generals had internalized a definition of corruption—unfortunately, however, a recent U.S. Supreme Court case, McDonnell v. United States, suggests that Americans (or at least the Supreme Court) have no such definition.

The facts of the case were that Robert McDonnell, then governor of Virginia, received $175,000 in benefits from the chief executive of a company called Star Scientific, presumably because Star Scientific not only wanted Virginia’s public universities to conduct research on its product, a “nutritional supplement” based on tobacco, but also felt McDonnell could conjure up those studies. The burden on the prosecution—according to Chief Justice John Roberts’ unanimous opinion—was to show “that Governor McDonnell committed (or agreed to commit) an ‘official act’ in exchange for the loans and gifts.” The case thus turned on the definition of “official act.”

According to the federal bribery statute, an “official act” is

any decision or action on any question, matter, cause, suit, proceeding or controversy, which may at any time be pending, or which may by law be brought before any public official, in such official’s official capacity, or in such official’s place of trust or profit.

McDonnell, of course, held that the actions he admitted taking on Star Scientific’s behalf—including setting up meetings with other state officials, making phone calls, and hosting events—did not constitute an “official act” under the law. The federal prosecutors, just as obviously, held that they did.

To McDonnell (or rather, his attorneys), treating the acts he took on behalf of Star Scientific as “official” stretched the statute too far: the government’s definition, they argued, made “‘virtually all of a public servant’s activities ‘official,’ no matter how minor or innocuous.’” The prosecutors argued that a broad definition of crooked acts is necessary to combat corruption; McDonnell argued that so broad a definition threatens the ability of public officials to act at all. Ultimately, his attorneys said, the breadth of the anti-corruption statute threatens constitutional government itself.

In the end the Court accepted that argument. In John Roberts’ words, the acts McDonnell committed could not be defined as anything “more specific and focused than a broad policy objective.” In other words, sure McDonnell got a bunch of stuff from a constituent, and then he did a bunch of things for that constituent, but the things that he did did not constitute anything more than simply doing his job—a familiar defense, to be sure, at Nuremberg.

The effective upshot of McDonnell, then, appears to be that the U.S. Supreme Court, at least, no longer has an adequate definition of corruption—which might appear to be a grandiose conclusion to hang on one court case, of course. But consider the response of Preet Bharara, former United States Attorney for the Southern District of New York, when he was asked by The New Yorker just why it was that his office did not prosecute anyone—anyone—in response to the financial meltdown of 2007-08. Sometimes, Bharara said in response, when “you see a building go up in flames, you have to wonder if there’s arson.” Sometimes, he continued, “it’s not arson, it’s an accident”—but sometimes “it is arson, and you can’t prove it.” Bharara’s comments suggested that the problem was an investigatory one: his detectives could not gather the right evidence. But McDonnell suggests that the problem may have been something else: a legal one, where the problem isn’t with the evidence but rather with the conceptual category required to use the evidence to prosecute a crime.

That something is going on is revealed by a report from Syracuse University’s Transactional Records Access Clearinghouse, or TRAC, which found in 2011 that federal prosecutions for financial crimes had been falling since the early 1990s—despite the fact that the economic crisis of 2007 and 2008 was driven by extremely questionable financial transactions. Other studies observe that the administration of Ronald Reagan, a president not generally thought of as a crusader type, prosecuted more financial crimes than Barack Obama’s did: in 2010, the Obama administration deported 393,000 immigrants—and prosecuted zero bankers.

The question, of course, is why that is so—to which any number of answers have been proposed. One, however, is especially resisted by those at the upper reaches of academia who are in the position of educating future federal prosecutors: people who, like Mouffe, think that

Democratic action … does not require a theory of truth and notions like unconditionality and universal validity but rather a variety of practices and pragmatic moves aimed at persuading people to broaden their commitments to others, to build a more inclusive community.

“Liberal democratic principles,” Mouffe goes on to claim, “can only be defended in a contextualist manner, as being constitutive of our form of life, and we should not try to ground our commitment to them on something supposedly safer”—that “something safer” being, I suppose, anything like the account ledgers of the German treasury from 1933 to 1945, which revealed the extent of Nazi corruption after the war.

To suggest, however, that there is a connection between the linguistic practices of professors and the failures of prosecutors is, of course, to engage in just the same style of argumentation as those who insist, with Mouffe, that it is “the mobilization of passions and sentiments, the multiplication of practices, institutions and language games that provide the conditions of possibility for democratic subjects and democratic forms of willing” that will lead to “the creation of a democratic ethos.” Among these is, for example, literary scholar Jane Tompkins, who once made a similar point by recommending, not “specific alterations in the current political and economic arrangements,” but instead “a change of heart.” But perhaps the rise of such a species of supposed “leftism” ought to be expected in an age characterized by vast economic inequality, which according to Nobel Prize-winning economist Joseph Stiglitz (a proud son of Gary, Indiana), “is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever.” The only question left, one supposes, is what else has been bought; the state of academia these days, it appears, suggests that academics can’t even see the Rhine, much less point the way to a bridge across.

No Hurry

The man who is not in a hurry will always see his way clearly; haste blunders on blindly.
—Titus Livius (Livy). Ab Urbe Condita. (From the Foundation of the City.) Book 22.

Just inland from the Adriatic coast, northwest of Bari, lies the little village of Canne. In Italian, the name means “reeds”; a nondescript name for a nondescript town. But the name has outlived at least one language, and will likely outlive another, all due to one August day more than 2000 years ago, when two ways of thinking collided; the conversation marked by that day has continued until now, and will likely outlive us all. One line of that conversation was taken up recently by a magazine likely as obscure as the village to most readers: Parameters, the quarterly publication of the U.S. Army War College. The article that continues the conversation whose earliest landmark may be found near the little river of Ofanto is entitled “Intellectual Capital: A Case for Cultural Change,” and the argument of the piece’s three co-authors—all professors at West Point—is that “recent US Army promotion and command boards may actually penalize officers for their conceptual ability.” It’s a charge that, if true, ought first to scare the hell out of Americans (and everyone else on the planet), because it means that the single most fearsome power on earth is more or less deliberately being handed over to morons. But it ought, second, to scare the hell out of people because it suggests that the lesson first taught at the sleepy Italian town has still not been learned—a lesson suggested by two words I withheld from the professors’ charge sheet.

Those words? “Statistical evidence”: as in, “statistical evidence shows that recent US Army promotion and command boards …” What the statistical evidence marshaled by the West Pointers shows, it seems, is that

officers with one-standard-deviation higher cognitive abilities had 29 percent, 18 percent, and 32 percent lower odds, respectively, of being selected early … to major, early to lieutenant colonel, and for battalion command than their one-standard-deviation lower cognitive ability peers.

(A “standard deviation,” for those who don’t know—and the fact that you don’t is part of the story being told here—is a measure of how far from the mean, or average, a given set of data tends to spread: a low standard deviation means that the data cluster pretty tightly, like a river in mountainous terrain, whereas a high one means that the data spread widely, like a river’s delta.) The study controlled for gender, ethnicity, year group, athleticism, months deployed, military branch, geographic region, and cumulative scores as cadets—and found that “if two candidates for early promotion or command have the same motivation, ethnicity, gender, length of Army experience, time deployed, physical ability, and branch, and both cannot be selected, the board is more likely to select the officer with the lower conceptual ability.” In other words, in the Army, the smarter you are, the less likely you are to advance quickly—which, obviously, may affect just how far you are likely to go at all.
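For readers who prefer to see the arithmetic, here is a minimal sketch in Python of what “one standard deviation higher” means; the scores below are invented for illustration and have nothing to do with the data in the Parameters study.

```python
import statistics

# Hypothetical cognitive-ability scores for a group of officers.
# (Illustrative numbers only; not data from the Parameters study.)
scores = [112, 98, 105, 120, 91, 108, 115, 99, 103, 110]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # population standard deviation

print(f"mean = {mean:.1f}, standard deviation = {sd:.1f}")

# "One standard deviation higher" simply means a score of mean + sd.
# The study compares officers at mean + sd with officers at mean - sd
# and reports how much lower the higher-scoring group's odds of early
# promotion or command selection turned out to be.
print(f"one SD above the mean: {mean + sd:.1f}; one SD below: {mean - sd:.1f}")
```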

That may be so, you might say, but maybe it’s just that smarter people aren’t very “devoted,” or “loyal” (or whatever sort of adjective one prefers), at least according to the military. This dichotomy even has a name in such circles: “Athens” vs. “Sparta.” According to the article, “Athens represents an institutional preference for intellectual ability, critical thinking, education, etc.,” while conversely “Sparta represents an institutional preference for motivation, tactical-ability, action-bias, diligence, intensity, physicality, etc.” So maybe the military isn’t promoting as many “Athenians” as “Spartans”—but then, maybe the military is simply a more “Spartan” organization than others. Maybe this study is just a bunch of Athenians whining about not being able to control every aspect of life.

Yet, if thought about, that’s a pretty weird way to conceptualize things: why should “Athens” be opposed to “Sparta” at all? In other words, why should it happen that the traits these names attempt to describe are distributed in zero-sum packages? Why should it be that people with “Spartan” traits should not also possess “Athenian” traits, and vice versa? The whole world supposedly divides along just these lines—but I think any of us knows someone who is both, or who is neither, and if so then it seems absurd to think that possessing a “Spartan” trait implies a lack of a corresponding “Athenian” one. As the three career Army officers say, “motivation levels and cognitive ability levels are independent of each other.” Just because someone is intelligent does not mean they are likely to be unmotivated; indeed, it makes more sense to think just the opposite.

Yet, apparently, the upper levels of the U.S. military think differently: they seem to believe that devotion to duty precludes intelligence, and vice versa. We know this not because of stereotypes about military officials, but instead because of real data about how the military allocates its promotions. In their study, the three career Army officers report that they

found significant evidence that regardless of what motivation/diligence category officers were in (low, medium, or high) there was a lower likelihood the Army would select the officers for early promotion or battalion command the higher their cognitive ability, despite the fact that the promotion and selection boards had no direct information indicating each officer’s cognitive ability. (Emp. added).

This latter point is so significant that I highlight it: it demonstrates that the Army is—somehow—selecting against intelligence even when it, supposedly, doesn’t know whether a particular candidate has it or not. Nonetheless, the boards are apparently able to suss it out (which itself is a pretty interesting use of intelligence) in order to squash it, and not only that, squash it no matter how devoted a given officer might be. In sum, these boards are not selecting against intelligence because they are selecting for devotion, or whatever, but instead are just actively attempting to promote less-intelligent officers.

Now, it may then be replied, that may be so—but perhaps fighting wars is not similar to doing other types of jobs. Or as the study puts it: perhaps “officers with higher intellectual abilities may actually make worse junior officers than their average peers.” If so, as the three career Army officers point out, such a situation “would be diametrically opposed to the … academic literature” on leadership, which finds a direct relationship between cognitive ability and success. Even so, however, perhaps war is different: the “commander of a top-tier special operations selection team,” the three officers say, reported that his team rejected candidates who scored too high on a cognitive ability test, on the grounds that such candidates “‘take too long to make a decision’”—despite the fact that, as the three officers point out, “research has shown that brighter people come up with alternatives faster than their average-conceptual-level peers.” Thinking that intelligence inhibits action, in other words, would make war essentially different from virtually every other human activity.

Of course, had that commander been in charge of recruitment during the U.S. Civil War, that would have meant not employing an alcoholic former officer who had resigned under a cloud, a man later denounced as “an unimaginative butcher in war and a corrupt, blundering drunkard in peace,” who failed in all the civilian jobs he undertook, as a farmer and even a simple store clerk, and came close to bankruptcy several times over the course of his life. That man was Ulysses S. Grant—the man about whom Abraham Lincoln would say, when his critics pointed to his poor record, “I cannot spare this man; he fights!” (In other words, he did not hesitate to act.) Grant would, as everyone knows, eventually accept the surrender of his adversary, Robert E. Lee, at Appomattox Court House; hence, a policy that runs the risk of not finding Grant in time appears, at best, pretty cavalier.

Or, as the three career Army officers write, “if an organization assumes an officer cannot be both an Athenian and a Spartan, and prefers Spartans, any sign of Athenians will be discouraged,” and so therefore “when the Army needs senior officers who are Athenians, there will be only Spartans remaining.” The opposite view somehow thinks that smart people will still be around when they are needed—but when they are needed, they are really needed. Essentially, this view is more or less to say that the Army should not worry about its ammunition supply, because if something ever happened to require a lot of ammunition the Army could just go get more. Never mind the fact that, at such a moment, everyone else is probably going to want some ammunition too. It’s a pretty odd method of thinking that treats physical objects as more important than the people who use them—after all, as we know, guns don’t kill people, people do.

Still, the really significant thing about Grant is not the man himself, but rather the particular method of thinking he represented: “I propose to fight it out on this line, if it takes all summer,” Grant wrote to Abraham Lincoln in May 1864; “Hold on with a bulldog grip, and chew and choke as much as possible,” Lincoln replied to Grant a few months later. Although Grant is, as above, sometimes called a “butcher” who won the Civil War simply by throwing more bodies at the Confederacy than the Southerners could shoot, he clearly wasn’t the idiot certain historians have made him out to be: the “‘one striking feature about Grant’s [written] orders,’” as another general would observe later, was that no “‘matter how hurriedly he may write them in the field, no one ever had the slightest doubt as to their meaning, or even has to read them over a second time to understand them.’” Rather than being unintelligent, Grant had a particular way of thinking: as one historian has observed, “Grant regard[ed] his plans as tests,” so that Grant would “have already considered other options if something doesn’t work out.” Grant had a certain philosophy, a method of both thinking and doing things—which he more or less regarded as the same thing. But Grant did not invent that method of thinking. It was already old when a certain Roman Senator conceived of a single sentence that, more or less, captured Grant’s philosophy—a sentence that, in turn, referred to a certain village near the Adriatic coast.

The road to that village is, however, a long one; even now we are just more than halfway there. The next step upon it was taken by a man named Quintus Fabius Maximus Verrucosus—another late bloomer, much like Grant. According to Plutarch, whose Parallel Lives sought to compare the biographies of famous Greeks and Romans, as a child Fabius was known for his “slowness in speaking, his long labour and pains in learning, his deliberation in entering into the sports of other children, [and] his easy submission to everybody, as if he had no will of his own,” traits that led many to “esteem him insensible and stupid.” Yet, as he was educated he learned to make his public speeches—required of young aristocratic Romans—without “much of popular ornament, nor empty artifice,” and instead with a “great weight of sense.” And also like Grant, who in the last year of the war faced a brilliant opposing general in Robert E. Lee, Fabius would eventually face an ingenious military leader who desired nothing more than to meet his adversary in battle—where that astute mind could destroy the Roman army in a single day and so, possibly, win the freedom of his nation.

That adversary was Hannibal Barca, the man who had marched his army, including his African war elephants, across the Alps into Italy. Hannibal was a Carthaginian: Carthage, a Phoenician city on the North African coast, had already fought one massive war with Rome (the First Punic War) and had now, through Hannibal’s invasion, embarked on a second. Carthage was about as rich and powerful as Rome itself, so by invading, Hannibal posed a mortal threat to the Italians—not least because Hannibal already had quite a reputation as a general. Hence Fabius, who by this time had himself been selected to oppose the invader, “deemed it best not to meet in the field a general whose army had been tried in many encounters, and whose object was a battle,” and instead attempted to “let the force and vigour of Hannibal waste away and expire, like a flame, for want of fuel,” as Plutarch put the point. Instead of attempting to meet Hannibal in a single battle, where the African might out-general him, Fabius attempted to wear him—an invader far from his home base—down.

For some time things continued like this: Hannibal ranged about Italy, attempting to provoke Fabius into battle, while the Roman followed meekly at a distance; according to his enemies, as if he were Hannibal’s servant. Meanwhile, according to Plutarch, Hannibal himself sought to encourage that idea: burning the countryside around Rome, the Carthaginian made sure to post armed guards around Fabius’ estates in order to suggest that the Roman was in his pay. Eventually, these stratagems had their effect, and after a further series of misadventures, Fabius retired from command—just the event Hannibal awaited.

The man who became commander after Fabius was Varro, and it was he who led the Romans to the small village near the Adriatic coast. What happened near that village more than 2000 years ago might be summed up by an image that may be familiar to viewers of the television show Game of Thrones:


On the television show, the chaotic mass in the middle is the tiny army of the character Jon Snow, whereas the orderly lines about the perimeter are the much vaster army of Ramsay Bolton. But in historical reality, the force in the center, the one being surrounded, was actually the larger of the two—the Roman army. It was the smaller of the two armies, the Carthaginian one, that stood at the periphery. Yet, somehow, the outcome was more or less the same: the soldiers on the outside of that circle destroyed the soldiers on the inside, despite being outnumbered—a fact so surprising that not only is the battle still remembered, but so are two remarks made about it.

The first of these is a remark made just before the battle itself—a remark that came in reply to the comment of one of Hannibal’s lieutenants, an officer named Gisgo, on the disparity in size between the two armies. The intent of Gisgo’s remark was, it would seem, something to the effect of, “you’re sure this is going to work, right?” To which Hannibal replied: “another thing that has escaped your notice, Gisgo, is even more amazing—that although there are so many of them, there is not one among them called Gisgo.” That is to say, Gisgo is a unique individual, and so the numbers do not matter … etc., etc. We can all fill in the arguments from there: the power of the individual, the singular force of human creativity, and so on. In the case of the incident outside Cannae, those platitudes happened to be true—Hannibal really was a kind of tactical genius. But he also happened not to be facing Fabius that day.

Fabius himself was not the sort of person who could sum up his thought in a pithy (and trite) remark, but I think the germ of his idea was distilled some centuries after the battle by another Roman senator. “Did all the Romans who fell at Cannae”—the ancient name for the village now known as Canne—“have the same horoscope?” asked Marcus Tullius Cicero, in a book entitled De Divinatione. The comment is meant as a deflationary pinprick, designed to explode the pretensions of the followers of Hannibal—a point revealed by a subsequent sentence: “Was there ever a day when countless numbers were not born?” The comment’s point, in other words, is much the same one Cicero made in another of his works, when he tells a story about the atheistic philosopher Diagoras. Reproaching his atheism, a worshipper directed Diagoras to the many painted tablets in praise of the gods at the local temple—tablets produced by storm survivors who had taken a vow to have such a tablet painted while enveloped by the sea’s power. Diagoras replied, according to Cicero, that this is merely so “because there are no pictures anywhere of those who have been shipwrecked.” In other words: check your premises, sportsfans: what you think may be the result of “creativity,” or some other malarkey, may simply be due to the actions of chance—in the case of Hannibal, the fact that he happened not to be fighting Fabius.

Or, more specifically, due to a statistical concept called the Law of Large Numbers. First explicitly described by the mathematician Jacob Bernoulli in 1713, this is the law that holds—in Bernoulli’s words—that “it is not enough to take one or another observation for […] reasoning about an event, but that a large number of them are needed.” In a crude way, this law is what critics of Grant refer to when they accuse him of being a “butcher”: that he simply applied the larger numbers of men and materiel available to the Union side to the war effort. It’s also what the enemies of the man who ought to have been on the field at Cannae—but wasn’t—said about him: that Fabius fought what military strategists call a “war of attrition” rather than a “war of maneuver.” At that time, and since, many have turned their noses up at such methods: in ancient times, they were thought to be ignoble, unworthy—which was why Varro insisted on rejecting what he might have called an “old man’s strategy” and went on the attack that August day. Yet those were precisely the means by which, two millennia apart, two very similar men saved their countries from very similar threats.
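For the curious, Bernoulli’s point can be seen in a few lines of Python; this is just an illustrative coin-flip simulation of my own, not anything taken from Bernoulli’s text or from the studies discussed above.

```python
import random

random.seed(42)

# Flip a fair coin: the true long-run proportion of heads is 0.5.
# With only a handful of flips the observed proportion can wander far
# from 0.5; as the number of flips grows, it settles toward the true
# value -- Bernoulli's point that one or two observations are not enough.
for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: proportion of heads = {heads / n:.4f}")
```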

Today, of course, very many people on the American “Left” say that what they call “scientific” and “mathematical” thought is the enemy. On the steps of the University of California’s Sproul Hall, more than fifty years ago, the Free Speech Movement’s Mario Savio denounced “the operation of the machine”; some years before that, the German Marxist Theodor Adorno and his collaborator Max Horkheimer had condemned the spread of such thought as, more or less, the precondition for the Holocaust: “To the Enlightenment,” the two wrote, “that which does not reduce to numbers, and ultimately to the one, becomes illusion.” According to Bruce Robbins of Columbia University, “the critique of Enlightenment rationality is what English departments were founded on,” while it has also been observed that, since the 1960s, “language, symbolism, text, and meaning came to be seen as the theoretical foundation for the humanities.” But as I have attempted to show, the notions conceived of by these writers as belonging to a particular part of the Eurasian landmass at a particular moment of history may not be so particular after all.

Leaving those large-scale considerations aside, however, returns us to the discussion concerning promotions in the U.S. military—where the assertions of the three career officers apparently cannot be allowed to go unchallenged. A reply to the three career officers’ article from a Parameters editorial board member, predictably enough, takes them to task for not recognizing that “there are multiple kinds of intelligence,” and instead suggesting that there is “only one particular type of intelligence”—you know, just the same smear used by Adorno and Horkheimer. The author of that article, Anna Simons (a professor at the U.S. Naval Postgraduate School), further intimates that the three officers do not possess “a healthy respect for variation”—i.e., “diversity.” Which, finally, brings us to the point of all this: what is really happening within the military is that, in order to promote what is called “diversity,” standards have to be amended in such a fashion as not only to include women and minorities, but also dumb people.

In other words, the social cost of what is known as “inclusiveness” is simultaneously a general “dumbing-down” of the military: promoting women and minorities also means rewarding not-intelligent people—and, because statistically speaking there simply are more dumb people than not, that also means suppressing smart people who are like Grant, or Fabius. It never appears to occur to anyone that, more or less, talking about “variation” and the like is what the enemies of Grant—or, further back, the enemies of Fabius—said also. But, one supposes, that’s just how it goes in the United States today: neither Grant nor Fabius was called to service until his countrymen had been scared pretty badly. It may be, in other words, that the American military will continue to suppress people with high cognitive abilities within its ranks—apparently, 9/11 and its consequences were not enough like the battle fought near the tiny Italian village to change American views on these matters. Statistically speaking, after all, 9/11 killed only about 0.001% of the U.S. population, whereas Cannae killed perhaps a third of the members of the Roman Senate. That, in turn, raises the central question: If 9/11 was not enough to convince Americans that something isn’t right, well—

What will?

 

Paper Moon

Say, it’s only a paper moon
Sailing over a cardboard sea
But it wouldn’t be make-believe
If you believed in me
—“It’s Only A Paper Moon” (1933).

 

As all of us sublunaries know, we now live in a technological age where high-level training is required for anyone who prefers not to deal methamphetamine out of their trailer—or at least, that’s the story we are fed. Anyway, in my own case the urge toward higher training has manifested in a return to school; hence my absence from this blog. Yet, while even I recognize this imperative, the drive toward scientific excellence is not accepted everywhere: as longer-term readers may know, last year ESPN’s Michael Wilbon wrote a screed (“Mission Impossible: African-Americans and Analytics”) that not only attacked the importation of what is known as “analytics” into sports—where he joined arms with nearly every old white guy sportswriter everywhere—but, more curiously, essentially claimed that the statistical analysis of sports is racist. “Analytics” seem, Wilbon said, “to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” But while Wilbon may be dismissed because “analytics” is obviously friendlier to black people than many other forms of thought—it seems patently clear that something that pays more attention to actual production than to whether an athlete has a “good face” (as detailed in Moneyball) is going to be, on the whole, less racist—he isn’t entirely mistaken. Even if Wilbon appears, moronically, to think that his “enemy” is just a bunch of statheads arguing about where to put your pitcher in the lineup, or whether two-point jump shots are valuable, he can be taken seriously once it’s recognized that his true opponent is none other than Sir Isaac Newton.

Although not many realize it, Isaac Newton was not simply the model of genius familiar to us today as the maker of scientific laws and victim of falling apples. (A story he may simply have made up in order to fend off annoying idiots—a feeling with which, if you are reading this, you may be familiar.) Newton did, of course, first conjure the laws of motion that, on Boxing Day 1968, led William Anders, aboard Apollo 8, to reply “I think Isaac Newton is doing … the driving now” to a ground controller’s son who asked who was in charge of the capsule—but despite the immensity of his scientific achievements, those laws were not the driving (ahem) force of his curiosity. Newton’s main interests, as a devout Christian, lay instead in ecclesiastical history—a topic that led him to perhaps the earliest piece of “analytics” ever written: an 87,000-word monstrosity published in 1728, the year after the great physicist’s death.

Within the pages of this book is one of the earliest statistical studies ever written—or so at least Karl Pearson, called “the founder of modern statistics,” realized some two centuries later. Pearson started the world’s first statistics department in 1911, at University College London; he either inaugurated or greatly expanded some half-dozen entire scientific disciplines, from meteorology to genetics. When Albert Einstein was a young graduate student, the first book his study group took up was a work of Pearson’s. In other words, while perhaps not a genius on the order of his predecessor Newton or his successor Einstein, Pearson was prepared to recognize a mind that was. More significantly, Pearson understood that, as he later wrote in the essay that furnishes the occasion for this one, “it is unusual for a great man even in old age to write absolutely idle things”: when someone immensely intelligent does something, it may not be nonsense no matter how much it might look it.

That’s what led Pearson, in 1928, to publish the short essay of interest here, which concerns a work that could appear to be the ravings of a religious madman but, as Pearson saw, wasn’t: Newton’s 1728 The Chronology of Ancient Kingdoms amended, to which is prefixed: A Short Chronicle from the First Memory of Things in Europe to the Conquest of Persia by Alexander the Great. As Pearson understood, it’s a work of apparent madness that conceals depths of genius. But it’s also, as Wilbon might recognize (were he informed enough to realize it), a work that is both a loaded gun pointed at African-Americans—and also, perhaps, a tool of liberation.

The purpose of the section of the Chronology that concerned Pearson—there are others—was what Pearson called “a scientific study of chronology”: that is, Newton attempted to reconstruct the reigns of various kings, from contemporary France and England to the ancient rulers of “the Egyptians, Greeks and Latins” to the kings of Israel and Babylon. By consulting ancient histories, the English physicist compiled lists of various reigns in kingdoms around the world—and what he found, Pearson tells us, is that “18 to 20 years is the general average period for a reign.” But why is this, which might appear to be utterly recondite, something valuable to know? Well, because Newton is suggesting that by using this list and average, we can compare it to any other list of kings we find—and thereby determine whether the new list is likely to be spurious or not. The greater the difference between the new list of kingly reigns and Newton’s calculations of old lists, in short, the more likely it is that the new list is simply made up, or fanciful.
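To see the shape of the test Newton was proposing, here is a minimal sketch in Python. Only the 18-to-20-year benchmark comes from Pearson’s account of Newton; the king lists, the tolerance, and the function names are my own invented stand-ins for illustration.

```python
# Newton's idea, as Pearson describes it: real dynasties average roughly
# 18-20 years per reign, so a list whose average reign is wildly longer
# is probably legendary padding rather than history.
# The reign lengths below are made-up examples, not Newton's data.

def average_reign(reign_lengths):
    return sum(reign_lengths) / len(reign_lengths)

def looks_plausible(reign_lengths, low=18.0, high=20.0, tolerance=5.0):
    """Crude screen: is the list's average reign near the 18-20 year band?"""
    avg = average_reign(reign_lengths)
    return (low - tolerance) <= avg <= (high + tolerance), avg

documented_list = [22, 15, 31, 9, 18, 24, 12, 20]   # plausible-looking reigns
legendary_list = [62, 55, 70, 48, 66, 59]            # suspiciously long reigns

for name, reigns in (("documented", documented_list), ("legendary", legendary_list)):
    ok, avg = looks_plausible(reigns)
    verdict = "plausible" if ok else "probably spurious"
    print(f"{name}: average reign {avg:.1f} years -> {verdict}")
```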

Newton did his study because he wanted to show that biblical history was not simply mythology, like that of the ancient Greeks: he wanted to show that the list of the kings of Israel exhibited all the same signs as the lists of kings we know to have really existed. Newton thereby sought to demonstrate the literal truth of the Bible. Now, that's not something, as Pearson knew, that anyone today is likely much to care about—but what is significant about Newton's work, as Pearson also knew, is that Newton here realized it's possible to use numbers to demonstrate something about reality, something that had never really been done before in quite this way. Within Newton's seeming absurdity, in sum, there lurked a powerful sense—the very same sense Bill James and others have been able to apply to baseball and other sports over the past generation and more, with the result that, for example, the Chicago Cubs (whose baseball operations were led by Theo Epstein, Bill James' acolyte) last year finally won, for the first time in more than a century, the final game of the season. In other words, during that nocturnal November moonshot on Chicago's North Side last year, Sir Isaac Newton was driving.

With that example in mind, however, it might be difficult to see just why a technique, or method of thinking, that allows a historic underdog finally to triumph over its adversaries after eons of oppression could be a threat to African-Americans, as Michael Wilbon fears. After all, like the House of Israel, neither black people nor Cubs fans are unfamiliar with the travails of wandering for generations in the wilderness—and so a method that promises, and has delivered, a sure road to Jerusalem might seem to be attractive, not a source of anxiety. Yet, while in that sense Wilbon’s plea might seem obscure, even the oddest ravings of a great man can reward study.

Wilbon is right to fear statistical science, that is, for a reason that I have been exploring recently: of all things, the Voting Rights Act of 1965. That might appear to be a reference even more obscure than the descendants of Hammurabi, but it is not: there is a statistical argument to be derived from Sections Two and Five of that act. As legal scholars know, those two sections form the legal basis of what are known as "majority minority districts": as one scholar has described them, these are "districts where minorities comprise the majority or a sufficient percentage of a given district such that there is a greater likelihood that they can elect a candidate who may be racially or ethnically similar to them." Since 1965, such districts have steadily grown in number, particularly since a 1986 U.S. Supreme Court decision (Thornburg v. Gingles, 478 U.S. 30 (1986)) that the Justice Department took to mandate their use in the fight against racism. The rise of such districts is essentially why, although there were fewer than five black congressmen in the United States House of Representatives prior to 1965, there are around forty today: a percentage of Congress (slightly less than 10%) not much less than the percentage of black people in the American population (slightly more than 10%). But what appears to be a triumph for black people may not be, so statistics may tell us, for all Americans.

That’s because, according to some scholars, the rise in the numbers of black congressional representatives may also have effectively required a decline in the numbers of Democrats in the House: as one such researcher remarked a few years ago, “the growth in the number of majority-minority districts has come at the direct electoral expense of … Democrats.” That might appear, to many, to be paradoxical: aren’t most African-Americans Democrats? So how can more black reps mean fewer Democratic representatives?

The answer, however, is provided, again perhaps strangely, by the very question itself: in short, by precisely the fact that most black voters (upwards of 90%) are Democrats. Concentrating black voters into congressional districts, in other words, also has the effect of concentrating Democratic voters: districts that elect black congressmen and women tend to see returns that are heavily Democratic. What that means, conversely, is that these are votes that are not being cast in other districts: as Steven Hill put the point for The Atlantic in 2013, drawing up majority minority districts "had the effect of bleeding minority voters out of all the surrounding districts," and hence worked to "pack Democratic voters into fewer districts." In other words, majority minority districts have indeed had the effect of electing more black people to Congress—at the likely cost of electing fewer Democrats. Or to put it another way: of electing more Republicans.
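
Here is a minimal sketch, in Python, of the packing effect just described; the vote totals are invented for illustration and stand in for no real districts.

# Toy illustration of "packing": the same 300 Democratic and 300 Republican
# voters, divided across three equal-sized districts in two different ways.
# All numbers are invented for illustration.

def seats_won(districts):
    """Count the districts in which Democratic votes exceed Republican votes."""
    return sum(1 for dem, rep in districts if dem > rep)

# Plan A: Democratic voters spread relatively evenly across the districts.
spread = [(110, 90), (105, 95), (85, 115)]

# Plan B: the same voters packed into one overwhelmingly Democratic district.
packed = [(180, 20), (60, 140), (60, 140)]

print(seats_won(spread))  # 2 seats for Democrats
print(seats_won(packed))  # 1 seat for Democrats, on identical statewide totals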

It's certainly true that some of the foremost supporters of majority minority districts have been Republicans: for example, the Reagan-era Justice Department mentioned above. Or Benjamin L. Ginsberg, who told the New York Times that such districts were "much fairer to Republicans, blacks and Hispanics" in 1992—when he was general counsel of the Republican National Committee. But while all of that is so—and there is more to be said about majority minority districts along these lines—these are only indirectly the reasons why Michael Wilbon is right to fear statistical thought.

That’s because what Michael Wilbon ought to be afraid of about statistical science, if he isn’t already, is what happens if somebody—with all of the foregoing about majority minority districts in mind, as well as the fact that Democrats have historically been far more likely to look after the interests of working people—happened to start messing around in a fashion similar to how Isaac Newton did with those lists of ancient kings. Newton, remember, used those old lists of ancient kings to compare them with more recent, verifiable lists of kings: by comparing the two he was able to make assertions about which lists were more or less likely to be the records of real kings. Nowadays, statistical science has advanced over Newton’s time, though at heart the process is the same: the comparison of two or more data sets. Today, through more sophisticated techniques—some invented by Karl Pearson—statisticians can make inferences about, for example, whether the operations recorded in one data set caused what happened in another. Using such techniques, someone today could use the lists of African-American congressmen and women and begin to compare them to other sets of data. And that is the real reason Michael Wilbon should be afraid of statistical thought.
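
To make that concrete, here is a minimal sketch, in Python, of the simplest modern form of such a comparison: Pearson's correlation coefficient, named for the same Karl Pearson discussed above. The two series are invented placeholders, not the congressional or income figures discussed below.

# Comparing two data sets the modern way: Pearson's correlation coefficient.
# Both series below are invented placeholders, not real data.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

series_a = [5, 6, 9, 13, 16, 21, 26, 30, 33, 38]                # a count rising over time
series_b = [0.5, 0.6, 0.8, 1.3, 1.8, 2.4, 3.1, 3.3, 3.6, 4.0]   # a share rising over time

print(round(pearson_r(series_a, series_b), 3))  # close to 1.0: the two series move together

A high correlation, of course, establishes only that two series move together; inferring that one caused the other requires considerably more care, which is precisely why such a juxtaposition is suggestive rather than conclusive.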

Because what happens when, let's say, somebody takes that data about black congressmen—and compares it to, I don't know, Thomas Piketty's mountains of data about economic inequality? Let's say, specifically, the share of American income captured by the top 0.01% of all wage earners? Here is a graph of African-American members of Congress since 1965:

Chart of African American Members of Congress, 1967-2012

And here is, from Piketty’s original data, the share of American income captured etc.:

Share of U.S. Income, .01% (Capital Gains Excluded), 1947-1998

You may wish to peruse the middle 1980s, when, perhaps coincidentally, right around the time of Thornburg v. Gingles, both take a huge jump. Leftists, of course, may complain that this juxtaposition could lead to blaming African-Americans for the economic woes suffered by so many Americans—a result that Wilbon should, rightly, fear. But on the other hand, it could also lead Americans to realize that their political system, in which the number of seats in Congress is so limited that "majority minority districts" have, seemingly paradoxically, resulted in fewer Democrats overall, may not be much less anachronistic than the system that governed Babylon—a result that, Michael Wilbon is apparently not anxious to tell you, might lead to something of benefit to everyone.

Either thought, however, can lead to only one conclusion: when it comes to the moonshot of American politics, maybe Isaac Newton should still—despite the protests of people like Michael Wilbon—be driving.

Comedy Bang Bang

In other words, the longer a game of chance continues the larger are the spells and runs of luck in themselves,
but the less their relative proportions to the whole amounts involved.
—John Venn. The Logic of Chance. (1888). 

 

“A probability that is very small for a single operation,” reads the RAND Corporation paper mentioned in journalist Sharon McGrayne's The Theory That Would Not Die, “say one in a million, can become significant if this operation will occur 10,000 times in the next five years.” The paper, “On the Risk of an Accidental or Unauthorized Nuclear Detonation,” was just what it says on the label: a description of the chances of an unplanned atomic explosion. Previously, American military planners had assumed “that an accident involving an H-bomb could never occur,” but the insight of this paper was that overall risk changes depending upon volume—an insight that ultimately depended upon a discovery first described by mathematician Jacob Bernoulli in 1713. Now called the “Law of Large Numbers,” Bernoulli's thought was that “it is not enough to take one or another observation … but that a large number of them are needed”—it's what allows us to conclude, Bernoulli wrote, that “someone who intends to throw at once three sixes with three dice, should be considered reckless even if winning by chance.” Yet, while recognizing the law—which implies that even low-probability events become likely given enough opportunities for them to occur—considerably changed how the United States handled nuclear weapons, it has had essentially no impact on how the United States handles certain conventional weapons: the estimated 300 million guns held by its citizens. One possible reason why that may be, suggests the work of Vox.com founder Ezra Klein, is that arguments advanced by departments of literature, women's studies, African-American studies and other such academic “disciplines” more or less openly collude with the National Rifle Association to prevent sensible gun control laws.
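
The arithmetic behind the RAND paper's point is worth seeing; here is a minimal sketch in Python, using the paper's own illustrative figures of one in a million per operation and 10,000 operations, and assuming the operations are independent.

# The RAND paper's point in two lines of arithmetic: if each operation carries
# a tiny, independent probability p of disaster, the chance of at least one
# disaster over n operations is 1 - (1 - p)**n.

p = 1e-6     # one in a million, per operation
n = 10000    # operations over the next five years, per the paper's example

at_least_one = 1 - (1 - p) ** n
print(at_least_one)  # about 0.00995, i.e. roughly a one percent chance

A one-in-a-million risk, repeated ten thousand times, becomes roughly a one-in-a-hundred risk; that is the sense in which overall risk changes with volume.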

The inaugural “issue” of Vox contained Klein's article “How Politics Makes Us Stupid”—an article that asked the question, “why isn't good evidence more effective in resolving political debates?” According to the consensus wisdom, Klein says, “many of our most bitter political battles are mere misunderstandings” caused by a lack of information—in this view, all that's required to resolve disputes is more and better data. But, Klein also writes, current research shows that “the more information partisans get, the deeper their disagreements become”—because there are some disagreements “where people don't want to find the right answer so much as they want to win the argument.” In other words, while some disagreements can be resolved by considering new evidence—as when the Strategic Air Command changed how it handled nuclear weapons in light of a statistician's recall of Bernoulli's work—some disagreements, like gun control, cannot.

The work Klein cites was conducted by Yale Law School professor Daniel Kahan, along with several co-authors, and it began—Klein says—by recruiting 1,000 Americans and then surveying both their political views and their mathematical skills. At that point, Kahan's group gave participants a puzzle, which asked them to judge an experiment designed to show whether a new skin cream made a skin condition better or worse, based on the data presented. The puzzle, however, was jiggered: although, in raw numbers, many more people got better using the skin cream than got worse using it, the rate at which cream users got worse was actually higher than the rate among those who did not use the cream. In other words, if you paid attention merely to the raw numbers, the data might appear to indicate one thing, while a calculation of percentages showed something else. As it turns out, most people relied on the raw numbers—and were wrong; meanwhile, people with higher mathematical skill were able to work through the problem to the right answer.
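
A minimal sketch of the trap, in Python; the counts are illustrative, in the spirit of the ones Kahan used rather than his exact figures.

# Why the raw counts mislead: far more cream users got better than got worse,
# yet the rate of getting worse is higher with the cream than without it.
# The counts are illustrative, not Kahan's exact figures.

cream_better, cream_worse = 223, 75
no_cream_better, no_cream_worse = 107, 21

worse_rate_cream = cream_worse / (cream_better + cream_worse)
worse_rate_no_cream = no_cream_worse / (no_cream_better + no_cream_worse)

print(f"Got worse with the cream:    {worse_rate_cream:.1%}")    # about 25%
print(f"Got worse without the cream: {worse_rate_no_cream:.1%}")  # about 16%
# The cream wins on raw counts (223 better vs. 75 worse) but loses on rates.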

Interestingly, however, the results of this study did not suggest to Kahan that scientific and mathematical education ought to be expanded. Instead, Kahan argues that the attempt by “economists and other empirical social scientists” to shear the “emotional trappings” from the debate about gun control in order to make it “a straightforward question of fact: do guns make society less safe or more” is misguided. Rather, because guns are “not just ‘weapons or pieces of sporting equipment,’” but “are also symbols,” the proper terrain to contest is not the grounds of empirical fact, but the symbolic: “academics and others who want to help resolve the gun controversy should dedicate themselves to identifying with as much precision as possible the cultural visions that animate this dispute.” In other words, what ought to structure this debate is not science, but culture.

To many on what's known as the “cultural left,” of course, this must be welcome news: it amounts to a recognition of “academic” disciplines like “cultural studies” and the like that have argued for decades that cultural meanings trump scientific understanding. As Canadian philosopher Ian Hacking put it some years ago in The Social Construction of What?, a great deal of work in those fields of “study” has made claims that approach saying “that scientific results, even in fundamental physics, are social constructs.” Yet though the point has, as I can attest from personal experience, become virtual common sense in departments of the humanities, there are several ways of understanding the phrase “social construct.”

As English professor Michael Bérubé has remarked, much of that work can be described as “following the argument Heidegger develops at the end of the first section of Being and Time,” where the German philosopher (and member of the Nazi Party) argued that “we could also say that the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” In more general terms, New York University professor Andrew Ross—the same Ross later burned in what's become known as the “Sokal Affair”—described one fashion in which such an argument could go: by tracing how a “scientific theory was advanced through power, authority, persuasion and responsiveness to commercial interests.” Of course, as a journalistic piece by Joy Pullmann—writing in the conservative Federalist—described recently, as such views have filtered throughout the academy they have led at least one doctoral student to claim in her dissertation at the education department of the University of North Dakota that “language used in the syllabi” of eight science classes she reviewed

reflects institutionalized STEM teaching practices and views about knowledge that are inherently discriminatory to women and minorities by promoting a view of knowledge as static and unchanging, a view of teaching that promotes the idea of a passive student, and by promoting a chilly climate that marginalizes women.

The language of this description, interestingly, equivocates between the claim that some, or most, scientists are discriminatory (a relatively safe claim) and the notion that there is something inherently discriminatory about science itself (the radical claim)—which itself indicates something of the “cultural” view. Yet although, as in this latter example, claims regarding the status of science are often advanced on the grounds of discrimination, it seems to escape those making such claims just what sort of ground is conceded politically by taking science as one's adversary.

For example, here is the problem with Kahan's argument over gun control: by agreeing to contest on cultural grounds, pro-gun-control advocates would be conceding their very strongest argument, because the Law of Large Numbers is not an incidental feature of science, but one of its very foundations. (It could perhaps even be the foundation, because science proceeds on the basis of replicability.) Kahan's recommendation, in other words, might appear not so much a change in tactics as an outright surrender: it's only in the light of the Law of Large Numbers that the pro-gun-control argument is even conceivable. Hence, it is very difficult to understand how an argument can be won if one's best weapon is, I don't know, controlled. In effect, conceding the argument made in the RAND paper quoted above is more or less to give up on the very idea of reducing the numbers of firearms, so that American streets could perhaps be safer—and American lives protected.

Yet another, even larger-scale, problem with taking the so-called “cultural turn,” as Kahan advises, is that abandoning the tools of the Law of Large Numbers does not merely concede ground on the gun control issue alone. It also does so on a host of other issues—perhaps foremost of them on matters of political representation itself. For example, it prevents an examination of the Electoral College from a scientific, mathematically knowledgeable point of view—as I attempted to do in my piece, “Size Matters,” from last month. It may help to explain what Congressman Steve Israel of New York meant when journalist David Daley, author of a recent book on gerrymandering, interviewed him on the practical effects of gerrymandering in the House of Representatives (a subject that requires strong mathematical knowledge to understand): “The Republicans have always been better than Democrats at playing the long game.” And there are other issues also—all of which is to say that, by attacking science itself, the “cultural left” may literally be preventing government from interceding on the part of the very people for whom they claim to speak.

Some academics involved in such fields have, in fact, begun to recognize this very point: all the way back in 2004, one of the field's chief practitioners, Bruno Latour, dared to ask himself “Was I wrong to participate in the invention of this field known as science studies?” The very idea of questioning the institution of that field can, however, seem preposterous: even now, as Latour also wrote then, there are

entire Ph.D. programs … still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.

Indeed. It has gotten to the point, in fact, where it would be pretty easy to think that the supposed “left” doesn't really want to win these arguments at all—that, perhaps, they just wish to go out …

With a bang.

Great! Again?

The utility of a subdivision of the legislative power into different branches … is, perhaps, at the present time admitted by most persons of sound reflection. But it has not always found general approbation; and is, even now, sometimes disputed by men of speculative ingenuity, and recluse habits.
—Joseph Story. Commentaries on the Constitution of the United States. 1833.

 

Nicolas de Caritat, Marquis of Condorcet (17 September 1743 – 28 March 1794)

“We habitually underestimate the effect of randomness,” wrote Leonard Mlodinow of Caltech in his 2008 book on the subject, The Drunkard's Walk: How Randomness Rules Our Lives—so much so, in fact, that “even when careers and millions of dollars are at stake, chance events are often conspicuously misinterpreted as accomplishments or failures.” But while that may be true, it's often very difficult to know just when chance has intervened; it's a hard thing to ask people to focus on things that never happened—but could have. Yet while that is so, there remain some identifiable ways in which chance interjects itself into our lives. One of them, in fact, is the way Americans pass their laws—the subject of an argument that has not only been ongoing for two centuries, but that America is losing.

When, in 1787, the United States wrote its constitution, Edmund Randolph introduced what has since been called “the Virginia Plan”—the third resolution of which asserted that “the national legislature ought to consist of two branches.” Those two branches are now called the Senate and the House of Representatives, which makes the American system of government a bicameral one: that is, one with two legislative houses. Yet, although many Americans tend to think of this structure as if it had been created along with the universe, in fact it is not one that has been widely copied.

“Worldwide,” wrote Betty Drexhage in a 2015 report to the government of the Netherlands, “only a minority of legislatures is bicameral.” More recently the Inter-Parliamentary Union, a kind of trade group for legislatures, noted that, of world governments, 77 are bicameral—while 116 have only one house. Furthermore, expressing that ratio without context over-represents bicameral legislatures: even among countries that do have two legislative houses, few have houses that are equally powerful, as the American House and Senate are. The British House of Lords, for example—the model for the Senate—has not been on a par politically with the House of Commons, even theoretically, since 1911 at the latest, and arguably since 1832.

Yet, why should other countries have failed to adopt the bicameral structure? Alternatively, why did some, including notable figures like Benjamin Franklin, oppose splitting the Congress in two? One answer is provided by an early opponent of bicameralism: the Marquis de Condorcet, who wrote in 1787's Letters from a Freeman of New Haven to a Citizen of Virginia on the Futility of Dividing the Legislative Power Among Several Bodies that “increasing the number of legislative bodies could never increase the probability of obtaining true decisions.” Probability is a curious word to use in this connection—but one natural for a mathematician, which is what the marquis was.

The astronomer Joseph-Jérôme de Lalande, after all, had “ranked … Condorcet as one of the ten leading mathematicians in Europe” at the age of twenty-one; his early skill attracted the attention of the great Jean d'Alembert, one of the most famous mathematicians of all time. By 1769, at the young age of 25, he was elected to the incredibly prestigious French Royal Academy of Sciences; later, he would work with Leonhard Euler, even more accomplished than the great d'Alembert. The field that the marquis plowed as a mathematician was the so-called “doctrine of chances”—what we today would call the study of probability.

Although in one sense then the marquis was only one among many opponents of bicameralism—his great contemporary, the Abbé Sieyès, was another—very few of them were as qualified, mathematically speaking, to consider the matter as the marquis was; if, as Justice Joseph Story of the United States would write later, the arguments against bicameralism “derived from the analogy between the movements of political bodies and the operations of physical nature,” then the marquis was one of the few who could knowledgeably argue from nature to politics, instead of the other way. And in this matter, the marquis had an ace.

Condorcet's ace was the mathematical law first discovered by an Italian physician—and gambler—named Gerolamo Cardano. Sometime around 1550, Cardano had written a book called Liber de Ludo Aleae; or, The Book on Games of Chance, and in that book Cardano took up the example of throwing two dice. Since the probability of throwing a given number on one die is one in six, the doctor reasoned, the probability of throwing that same number on both dice is 1/6 multiplied by 1/6, which is 1/36. Since 1/36 is much, much less likely than 1/6, it follows that it is much less likely that a gambler will roll double sixes than it is that the same gambler will roll a single six.

According to J. Hoffmann-Jørgensen of the University of Aarhus, what Cardano had discovered was the law that the “probability that two independent events occurs simultaneously equals the product of their probabilities.” In other words, the chance of two independent events both happening is the product of their separate chances, and is therefore smaller than the chance of either one alone—which is why, for example, a perfecta bet in horse racing pays off so highly: it's much more difficult to choose two horses than one. By the marquis' time the mathematics was well-understood—indeed, it could not have been unknown to virtually anyone with any knowledge of mathematics, much less to one of the world's authorities on the subject.
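
A minimal sketch, in Python, checking Cardano's figure by brute enumeration of the thirty-six equally likely ways two dice can land.

# Cardano's product rule checked by enumeration: of the 36 equally likely
# outcomes of rolling two dice, exactly one is double sixes.

from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))      # all 36 possible rolls
double_sixes = [roll for roll in outcomes if roll == (6, 6)]

print(Fraction(len(double_sixes), len(outcomes)))    # 1/36, by counting
print(Fraction(1, 6) * Fraction(1, 6))               # 1/36, by the product rule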

The application, of course, should be readily apparent: by requiring legislation to pass through two houses rather than one, bicameralism thereby—all by itself—lessens the chance of legislative passage, because the chance of clearing both houses is the product of the chances of clearing each. Anecdotally, this is something that has been, if imperfectly, well-known in the United States for some time: “Time and again a bill threatening to the South” prior to the Civil War, as Leonard Richards of the University of Massachusetts has pointed out, “made its way through the House only to be blocked in the Senate.” Or, as labor lawyer Thomas Geoghegan once remarked—and he is by no means young—his “old college teacher once said, ‘Just about every disaster in American history is the result of the Senate.’” And as political writer Daniel Lazare pointed out in Slate in 2014, even today the “US Senate is by now the most unrepresentative major legislature in the ‘democratic world’”—because there are two senators from every state, legislation desired by ninety percent of the population can be blocked. Hence, just as the Senate blocked anti-slavery legislation—and much else besides—from passage prior to the Civil War, so too does it continue to function in that role today.
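
Here is a minimal sketch, in Python, of the marquis' point applied to a legislature, under the simplifying assumption that a bill's chances in the two chambers are independent; the passage probabilities are invented for illustration.

# Condorcet's point in miniature: if a bill's fate in each chamber were
# independent, the chance of clearing both is the product of its chances in
# each, so adding a second chamber can only lower (or at best equal) the
# chance of passage. The probabilities below are invented for illustration.

p_first_chamber = 0.6    # chance a given bill clears a single chamber
p_second_chamber = 0.5   # chance the same bill clears a second chamber

p_unicameral = p_first_chamber
p_bicameral = p_first_chamber * p_second_chamber

print(p_unicameral)  # 0.6
print(p_bicameral)   # 0.3: half the bills that clear one house die in the other

In practice the two houses' votes are far from independent, but the direction of the effect is the same: a second veto point never raises the probability of passage.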

Yet, although many Americans may know—the quotations could be multiplied—that there is something not quite right about the bicameral Congress, and some of them even mention it occasionally, it is very rare to notice any mention of the Marquis de Condorcet’s argument against bicameral legislatures in the name of the law of probability. Indeed, in the United States even the very notion of statistical knowledge is sometimes the subject of a kind of primitive superstition.

The baseball statistician Bill James, for example, once remarked that he gets “asked on talkshows a lot whether one can lie with statistics,” apparently because “a robust skepticism about statistics and their value had [so] permeated American life” that today (or at least, in 1985, when James wrote) “the intellectually lazy [have] adopted the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to.” Whether there is a direct relationship between these two—the political import of the marquis' argument so long ago, and the much later suspicion of statistics noted by James—is unclear, of course.

That may be about to change, however. James, for example, who was once essentially a kind of blogger before the Internet, has gradually climbed the best-seller lists; meanwhile, his advice and empirical method of thinking have steadily infected the baseball world—until last year the unthinkable happened, and the Chicago Cubs won the World Series while led by a man (Theo Epstein) who held up Bill James as his hero. At the same time, as I've documented in a previous blog post (“Size Matters”), Donald Trump essentially won the presidency because his left-wing opponents do not understand the mathematics involved in the Electoral College—or cannot, probably because of their prior commitment to “culture,” effectively communicate that knowledge to the public. In other words, chance may soon make the argument of the marquis—long conspicuously misinterpreted as a failure—into a sudden accomplishment.

Or perhaps rather—great again.

A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton. Paradise Lost. Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins' book, The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, clearly Eagleton is better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that it's entirely possible that someone could be just as educated as Eagleton in theology—and yet arguably hold views closer to Dawkins' than to Eagleton's. As it happens, that person not only once existed, but Eagleton wrote a review of someone else's biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult then to think of Aquinas, the “Angelic Doctor” as he is sometimes referred to by Catholics, as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by means of examining old books and telling people about what they found, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton for example says about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation of Aquinas’ thought is that it is usually thought of as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas' statement is after all sometimes held to be one of the foundations of scientific belief. Sometimes called “Occam's Razor,” the axiom was invoked by Isaac Newton in the Principia Mathematica, when the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton's successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs more or less Aquinas' thought that there is merely a single world—it's just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposed to have been expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world's description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas' way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton's disagreement with Aquinas is different: instead of accepting the single-world hypothesis and disputing only whether that world is God's, Eagleton takes issue with the “principle of parsimony” itself—the contention that there need be only a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befitting a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask this is because the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton's compatriots—four legislative houses.) The French revolutionary leader, the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: for example, Mirabeau wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such a thought has not been limited to the eighteenth century, nor to the right-hand side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there's reason in fact to think that the French got the idea from the English in the first place, given that Oliver Cromwell's “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867's The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee's Labour government took power, one of Orwell's complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as that of the French.

Support for the idea of a single legislative house, called unicameralism, is however not limited to European sources. The Marquis de Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701's Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan's Ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin's concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris' side won, and today Nebraska's laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the British Parliament passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, a move the Italian leader claimed—much as Norris had claimed—would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are for instance currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral ones and two more bicameral ones, according to a 2015 report by Betty Drexhage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with widespread adoption of a perspective closer to Dawkins' than to Eagleton's.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton's position that appears ascendant. Eagleton's dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as the advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge that depicts the sciences as opposed to something called “the humanities” is one that is merely contingent, not a necessary part of reality. Therefore, Terry Eagleton, and other scholars in those disciplines, may advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody's delusional, all right.

It just may not be Richard Dawkins.