A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
Paradise Lost. Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book, The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, Eagleton is clearly the better read in theology. Yet although Dawkins the real person may be ignorant of the subtleties of the study of God, the rules of logic suggest that it’s entirely possible for someone to be just as educated as Eagleton in theology—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, that person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult then to think of Aquinas, the “Angelic Doctor” as he is sometimes referred to by Catholics, as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by means of examining old books and telling people about what they found, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton for example says about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation of Aquinas’ thought is that it is usually thought of as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is after all sometimes held to be one of the foundations of scientific belief. Sometimes called “Occam’s Razor,” the axiom was invoked by Isaac Newton in the Principia Mathematica, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs more or less Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposed to have been expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both in other words think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is of a different kind: instead of accepting the single-world hypothesis and merely disputing whether God belongs in that world’s description, Eagleton’s quarrel is with the “principle of parsimony” itself—with the contention that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befits a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask is that the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such thinking has been limited neither to the eighteenth century nor to the French side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there is reason, in fact, to think the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is however not limited to European sources. The Marquis de Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed toward water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the British Parliament passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker; the Italian leader claimed—much as Norris had—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are, for instance, currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral ones and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as the advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge that depicts the sciences as opposed to something called “the humanities” is one that is merely contingent, not a necessary part of reality. Therefore, Terry Eagleton, and other scholars in those disciplines, may advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.


Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
Joseph Heller. Catch-22. (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore over (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famous sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component to winning baseball games, and winning baseball games is a key component to winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. Yet on the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games, but occasionally giving up blow-out losses—and yet as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue that hosted the 1932 Olympics and was never meant for baseball; its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. Finally, the 1964 Cardinals had, first, Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him), and second, the advantage of playing in the first year after major league baseball’s Rules Committee redefined the strike zone to be slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
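The predictive value of run differential can be made concrete with Bill James’s own rule of thumb, the “Pythagorean expectation” (my choice of illustration; the essay’s sources don’t compute this here), which estimates a team’s winning percentage from its runs scored and allowed. A minimal sketch in Python, using the 1984 Royals’ totals quoted earlier:

```python
def pythagorean_winpct(runs_scored: int, runs_allowed: int) -> float:
    """Bill James's Pythagorean expectation: win% ~ RS^2 / (RS^2 + RA^2)."""
    return runs_scored**2 / (runs_scored**2 + runs_allowed**2)

# The 1984 Royals: 673 runs scored, 684 allowed (figures quoted above).
royals = pythagorean_winpct(673, 684)
print(round(royals, 3))        # expected winning percentage
print(round(162 * royals, 1))  # expected wins over a 162-game season
```

By this estimate the Royals “should” have finished below .500 (roughly 80 wins in 162 games), which is one way of quantifying how much chance favored them in 1984.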

That’s also, as it happens, just what the Law of Large Numbers does: first noted by the mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations the more likely it is that all those bets will converge toward the expected result. In other words, one coin toss might be heads or might be tails—but the more times the coin is thrown, the closer the fraction of heads will come to one half.
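Bernoulli’s law is easy to watch in action. The sketch below (my illustration, not anything from Bernoulli, Mlodinow, or Taleb) flips a simulated fair coin and reports how far the observed fraction of heads strays from one half as the number of tosses grows:

```python
import random

def heads_fraction(n_flips: int, rng: random.Random) -> float:
    """Fraction of heads observed in n_flips tosses of a fair coin."""
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

rng = random.Random(42)  # fixed seed so the run is repeatable
for n in (10, 1_000, 100_000):
    deviation = abs(heads_fraction(n, rng) - 0.5)
    print(f"{n:>7} tosses: deviation from 1/2 = {deviation:.4f}")
```

The deviation shrinks roughly like one over the square root of the number of tosses, which is the same effect that makes a 162-game run differential steadier than any short stretch of games.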

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather through winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” Bizarre as that sort of result is, however, according to some it’s just what makes the Electoral College worth keeping.

Hively was covering that story in 1996: his Discover story was about how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff tried to argue that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball, and hence it ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is that it should eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that one team was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. It may even be argued that baseball, by increasingly relying on a playoff system instead of the regular season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969 the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams with a run differential of less than 1.100 to 1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
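Mlodinow’s claim about series length can be checked by simulation. In the sketch below the stronger team’s 55 percent per-game edge is an assumed figure of my own, not Mlodinow’s; the point is only how often a short series crowns the weaker team:

```python
import random

def series_win_prob(p_game: float, games_needed: int,
                    trials: int, rng: random.Random) -> float:
    """Monte Carlo estimate of the chance that the stronger team
    (per-game win probability p_game) takes a first-to-games_needed series."""
    series_wins = 0
    for _ in range(trials):
        a = b = 0
        while a < games_needed and b < games_needed:
            if rng.random() < p_game:
                a += 1
            else:
                b += 1
        series_wins += (a == games_needed)
    return series_wins / trials

rng = random.Random(7)
print(series_win_prob(0.55, 4, 100_000, rng))   # best-of-7: the World Series
print(series_win_prob(0.55, 135, 20_000, rng))  # best-of-269, per Mlodinow
```

Under this assumption the better team wins a best-of-7 only about 60 percent of the time, while the 269-game series picks it out roughly 95 percent of the time: a short playoff simply does not run long enough for the Law of Large Numbers to assert itself.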

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution (the human thumb) to a problem and the actual solution (the panda’s thumb) implies the presence of “history,” that would intimate that the Electoral College is either the result of a failure to understand the mathematics of chance with regard to elections, or evidence that the American system for electing presidents was not designed for the purpose it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reinforced one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, the marquis had already been, for years, a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. (The two had met even before Jefferson came to France as the American minister.) Although not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at least, then, and most likely Madison as well, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
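Condorcet’s theorem can be verified directly from the binomial distribution. In the sketch below the 51 percent voter competence is an assumed figure of my own, chosen to show that even a tiny individual edge compounds as the electorate grows:

```python
from math import comb

def majority_correct(n_voters: int, p: float) -> float:
    """Exact probability that a simple majority of n_voters (odd),
    each independently correct with probability p, picks the right option."""
    need = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(need, n_voters + 1))

for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters: majority correct with prob {majority_correct(n, 0.51):.3f}")
```

With each voter right just 51 percent of the time, a lone voter is right 51 percent of the time, but a thousand-voter majority is right roughly three times in four; as the electorate grows without bound the probability approaches certainty, which is Condorcet’s mathematical case for the widest possible pool of voters.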

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, Wilson’s proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically-influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23 games long, and possibly as long as 269—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.