A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
Paradise Lost. Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book, The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, clearly Eagleton is better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that it’s entirely possible for someone to be just as educated as Eagleton in theology—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton reviewed someone else’s biography of him. His name was Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the Church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas, the “Angelic Doctor” as Catholics sometimes call him, as being on Dawkins’ side in this dispute: both Aquinas and Eagleton made their livings by examining old books and telling people what they found there, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation of this thought of Aquinas’ is that the principle is usually counted as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. Sometimes called “Occam’s Razor,” the axiom was invoked by Isaac Newton in the Principia, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is only a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposed to have been expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement is of a different kind: instead of accepting the single-world hypothesis and merely disputing whether that world requires God, Eagleton quarrels with the “principle of parsimony” itself—with the claim that there can be only a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is aware of that discussion and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befits a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask is that the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such a thought has not been limited to the eighteenth century, nor to the French side of the English Channel.

Indeed, there have long been similarly minded people across the Channel—there’s reason, in fact, to think that the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is not, however, limited to European sources. Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin’s concerns were taken up, a century and a half later, by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the British Parliament passed the House of Lords Act, which largely ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker; the Italian leader claimed—much as Norris had—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral ones and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with the widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as the advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge depicting the sciences as opposed to something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may therefore advertise themselves as being on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.

I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson. “Letter to William Charles Jarvis.” 28 September 1820.

 

 

When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C. by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in its 2009 Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy, of The New Yorker, recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take, for instance, the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hasn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like the former presidential economic advisor and president of Harvard University, Lawrence Summers, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, new spending does little more than offset the wearing-out of what already exists.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.
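As a back-of-the-envelope sketch of what freezing that 18.4-cent rate means in real terms: the figure below for average annual inflation is an assumption chosen purely for illustration, not an official statistic.

```python
# Back-of-the-envelope sketch: the real value of a nominal tax frozen since 1993.
# The 18.4-cent rate comes from the paragraph above; the ~2.2% average annual
# inflation rate is an assumption for illustration only.

nominal_tax = 0.184        # dollars per gallon, unchanged since 1993
avg_inflation = 0.022      # assumed average annual inflation rate
years = 2014 - 1993

real_value = nominal_tax / ((1 + avg_inflation) ** years)
print(f"Gas tax in 1993 dollars, circa 2014: about ${real_value:.3f} per gallon")
# Under these assumptions the tax has lost more than a third of its purchasing power.
```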

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because of “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35W bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although it was supported by 51 votes in favor to 49 against, the measure failed to pass—because, as Helderman wrote, under the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, the United States would almost certainly be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority for ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, in which George Washington tells Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, it still captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified his remarks, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s in fact precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history, in other words, demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment: roads and canals and bridges, but also a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as it goes. But Nagel’s somewhat bland remark does not do justice to the matter of how Adams’ proposals were defeated.

After the election of 1824, which also elected the 19th Congress, Adams’ party had a majority in the House of Representatives—one reason why Adams became president at all, since the chaotic presidential election of 1824, split among four major candidates, was decided (as per the Constitution) by the House of Representatives. But while Adams’ faction had a majority in the House, it did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 senators (in a Senate of 48 senators, two to a state), while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires agreement between the two houses of Congress before bills can become law, this meant that the Senate could—as it did—effectively veto any of the Adams party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a matter seemingly remote from the building of bridges or canals. Hence, although senators like John Tyler of Virginia—who would later become president himself—opposed the “bills that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, those motives were about as venal as motives can be.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon” were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, in reality these seemingly resounding defenses of limited government were motivated by a less-than-savory interest: “statesmen of the Old South,” Hyman wrote, found that these doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates among a few dozen, usually elderly, white men nearly two centuries ago have been rendered moot by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at other points in American history, it was well understood that the creation of such “blue-ribbon” bodies—of which the Senate is one—was in fact simply a means of elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which then makes for “minority, not majority, rule.” In other words, even if many circumstances in American life have changed since 1825, the American government remains largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control it. And while slavery was abolished by the Civil War, a minority can still block things like infrastructure spending.
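Sturgis’s arithmetic is easy to check for oneself; the minimal sketch below, in Python, simply uses the vote totals from her example to show that under a two-thirds requirement the smaller bloc decides the outcome.

```python
# A minimal check of Sturgis's point: under a two-thirds requirement,
# sixty-five "yes" votes lose to thirty-five "no" votes.
from fractions import Fraction

def passes(yes: int, total: int, threshold: Fraction = Fraction(2, 3)) -> bool:
    """True if the proposal clears the required share of the votes cast."""
    return Fraction(yes, total) >= threshold

print(passes(65, 100))                              # False: the thirty-five decide
print(passes(51, 100, threshold=Fraction(1, 2)))    # True under a simple majority
```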

Hence, since infrastructure spending is—nearly by definition—for the improvement of every American, it’s difficult to see how making such spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason why we aren’t spending money on infrastructure, not because (as pundits like Cassidy might have it) “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. In fact, it is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of sports at all, the sport is either extremely obscure, obscenely technical, and played without a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has a precise picture of such a person in mind—probably you know at least a few, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both of those men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, enterprises like fantasy football and, above all, sabermetrics have in the meantime expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents) the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as an amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, the Scottish-born founder of modern geology. In his Principles of Geology (which largely founded the field) Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things past are likely to be like those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify the player’s own confidence in his ability.) Sabermetricians disregard such “methods” of analysis in favor of examination of the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or, for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, nearly inevitable: sabermetrics, after all, is a discipline well suited to the purpose of discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems likely, then, that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective certain key features of the American system nearly instantly jump out.

The first of those features is that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—a team’s runs scored and runs allowed versus its actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would predict, then either that team is tremendously unlucky or some hidden factor is preventing its success. Maybe, for instance, the team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters it has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
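For readers who like to see the arithmetic, here is a minimal sketch of the Pythagorean expectation in Python, using the classic exponent of two; the team totals are invented purely for illustration.

```python
# A minimal sketch of the Pythagorean expectation with the classic exponent
# of 2; the team totals below are invented purely for illustration.

def pythagorean_expectation(runs_scored: float, runs_allowed: float,
                            exponent: float = 2.0) -> float:
    """Estimate a team's winning percentage from its run totals."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

# Hypothetical season: 800 runs scored, 700 allowed, 162 games played.
expected_pct = pythagorean_expectation(800, 700)
expected_wins = expected_pct * 162
actual_wins = 78  # suppose the team somehow won only 78 games

print(f"Expected winning percentage: {expected_pct:.3f}")           # about .566
print(f"Expected wins: {expected_wins:.1f}, actual wins: {actual_wins}")
# A gap of roughly fourteen wins is exactly the sort of disparity that,
# per the paragraph above, compels further investigation.
```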

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, that would set off alarms for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it ignored a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.

Some will defend that disconnect, as Justice Harlan did over fifty years ago, in terms familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as The Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see an American revolutionary around you right now, he or she is likely wearing not a poncho or a black turtleneck, but an Oakland A’s hat.

Bend Sinister

The rebs say that I am a traitor to my country. Why tis this[?] [B]ecause I am for a majority ruling, and for keeping the power in the people[?]
—Jesse Dobbins
Yadkin County, North Carolina
Federal pension application
Adjutant General’s Office
United States Department of War
3 July 1883.

Golf and (the theory of) capitalism were born in the same small country (Scotland) at the same historical moment, but while golf is entwined with the corporate world these days, there’s actually a profound difference between the two: for capitalism everything is relative, but the value of a golf shot is absolute. Every shot is strictly as valuable as every other. The difference can be found in the concept of arbitrage—which conventional dictionaries define as taking advantage of a price difference between two markets. It’s at the heart of the financial kind of capitalism we live with these days—it’s why everything is relative under the regime of capitalism—but it’s completely antithetical to golf: you can’t trade golf shots. Still, the concept of arbitrage does explain one thing about golf: how a golf club in South Carolina, in the Lowcountry—the angry furnace of the Confederacy—could come to be composed of Northern financial types and be named “Secession,” in a manner suggesting its members believed, if only half-jokingly, that the firebrands of 1860 might not have been all wrong.

That, however, gets ahead of the story, which begins with yet another golf tournament started on the tenth tee. Historically, as some readers may remember, I haven’t done well starting on the tenth hole. To recap: twice I’ve started loops for professional golfers in tournaments on the tenth tee, and each time my pro has blown the first shot of the day out of bounds. So when I saw where we were starting at Oldfield Country Club just outside of Hilton Head in South Carolina, site of an eGolf tournament, my stomach dropped as if I were driving over one of the arched bridges across the housing development’s canals.

Both of those tenth holes were also, coincidentally or not, dogleg rights: holes that begin at the tee, or upper left so to speak, and move towards the green in a more-or-less curved arc that ends, figuratively, on the lower right. In heraldry, a stripe running in such a fashion is called a “bend sinister”: as Vladimir Nabokov put it in explaining the title of his novel by that name, “a bar drawn from the upper left to the lower right on a coat of arms.” My player was, naturally, assigned to start at the tenth tee. My history with such starts went unmentioned.

Superstitious nonsense aside, however, there are likely reasons why my pros should have had a hard time with a dogleg right. Very often on a dogleg right the trees close off the right side quickly: there’s no room to start the ball out to the right in order to draw it back onto the fairway, which is to say that golfers who draw the ball are at a disadvantage. As the draw is the typical flight of your better player—while it might be so that the very longest players very often play a “power fade”—it’s perhaps not accidental that marginal players (the only type I, as an unproven commodity, might hope to obtain) tend to be drawers of the ball.

Had I known what I found out later, I might have been more anxious: my golfer had “scrapped … Operation Left to Right”—a project designed to enable him to hit a fade on command—all the way back in 2011, as detailed in a series of Golf Channel articles about him and his struggles in golf’s minor leagues. (“The Minors,” golfchannel.com) His preferred ball flight was a draw, a right-to-left shot, which is just about the worst kind of shot you can have on a dogleg-right hole. The tenth at Oldfield had, of course, just that kind of shape.

Already, the sky was threatening, and the air had a chill to it: the kind of chill that can cause the muscles in your hands to be less supple, which can make it just that much harder to “release” the clubhead—which can cause a slice, a left-to-right movement of the ball. Later on, my player would indeed lose several tee shots to the right, all of them push-fades, including a tough-to-take water ball on the twelfth (our third) hole, a drivable par four. Eventually the rain would become so bad that the final round was canceled the next day, which left me at loose ends.

Up past Beaufort there’s a golf club called Secession—a reference to South Carolina’s pride of place with regard to the events leading up to the Civil War: it was the first state to secede, in late December of 1860, and it actually helped persuade the other Southern states to secede with it by sending encouraging emissaries to them. Yet while that name might appear deeply Southern, the membership is probably anything but. Secession, the golf club, is an extremely private course that has become what Augusta began as: a club for the financial guys of New York and Chicago to visit and gamble large sums on golf. Or, to put it another way, the spiritual descendants of the guys who financed Abraham Lincoln’s war.

You might think, of course, that such a place would be somewhat affected by the events of the past five years or so. In fact it isn’t: on the day I stopped in, every tee box seemed filled with foursomes, quite a few of them accompanied by loopers carrying doubles. Perhaps I should have known better, since, as Chris Lehmann at The Baffler has noted, the “top 1 percent of income earners have taken in fully 93 percent of the economic gains since the Great Recession.” In any case, my errand was unsuccessful: I found out, essentially, that I would need some kind of clout. So, rather than finding my way back directly, I spent a pleasant afternoon in Beaufort. While there, I learned the story of one Robert Smalls, namesake of a number of the town’s landmarks.

“I thought the Planter,” said Robert Smalls when he reached the deck of the USS Onward outside of Charleston Harbor in the late spring of 1862, “might be of some use to Uncle Abe.” Smalls, the pilot, had, along with his crew, stolen the Confederate ship Planter right out from under the Confederate guns by mimicking the Planter’s captain—Smalls knew the usual signals for leaving the harbor, and by the half-light of dawn he looked enough like that officer to secure permission from the sentries at Sumter. (He also knew enough to avoid the minefields, since he’d helped to lay them.) Upon reaching the Union blockade ships on the open Atlantic, Smalls surrendered his vessel to the United States officer in command.

After the war—and a number of rather exciting exploits—Smalls came back to Beaufort, where he bought the house of his former master, a man named McKee, with the bounty money he got for stealing the Planter, and got elected to both the South Carolina House of Representatives and the South Carolina Senate, founding the Republican Party in South Carolina along the way. In office he wrote legislation that provided for South Carolina to have the first statewide public school system in the history of the United States, and then he was elected to the United States House of Representatives, where he became the last Republican congressman from his district until 2010.

Historical tourism in Beaufort thus means confronting the fact that the entire Lowcountry, as it’s called down here, was the center of secessionism. That’s in part why, in much of South Carolina, the war ended much earlier than in most of the South: the Union invaded by sea in late 1861, eighty years before Normandy, in a fleet whose size would not be rivaled until after Pearl Harbor. That’s also why, as the British owner of a bar in the town I’m staying in, Bluffton, notes, the first thing the Yankees did when they arrived in Bluffton was burn it down. They did so in order to make a statement similar to the larger point Sherman would later make during his celebrated visit to Atlanta.

The reason for such vindictiveness was that the slaveowners of the Lowcountry stood at what their longtime senator, John Calhoun, had long before called the “furthest outpost” of slavery’s empire. They not only wanted to continue slavery, they wanted to expand its reach—that, in fact, is the moral of the curious tale of the yacht Wanderer, funded by a South Carolinian. It’s one of those incidents that happened just before the war, whose meaning would only become clear with the passage of time—and Sherman.

The Wanderer was built in 1857 on Long Island, New York, as a pleasure yacht. Her first owner, Col. John Johnson, sailed her down the Atlantic coast to New Orleans, then sailed her back to New York, where one William Corrie, of Charleston, South Carolina, bought her. Corrie made some odd alterations to the ship—adding, for instance, a 15,000-gallon water tank. The work attracted the attention of federal officers aboard the steam revenue cutter USS Harriet Lane, who seized the ship as a suspected slaver when she attempted to leave New York harbor on 9 June 1858. But there was no evidence of her owner’s intentions beyond the alterations themselves, and so the Wanderer was released. She arrived in Charleston on 25 June, completed her fitting out as a slave ship and, after a stop in Port of Spain, Trinidad, sailed for the Congo on 27 July. The Wanderer returned to the United States on 28 November, at Jekyll Island in Georgia, still in the Lowcountry.

The ship bore a human cargo.

Why, though, would William Corrie—and his partners, including the prominent Savannah businessman Charles Lamar, a member of a family that “included the second president of the Republic of Texas, a U.S. Supreme Court justice, and U.S. Secretary of the Treasury Howell Cobb”—have taken so desperate a measure as to attempt to smuggle slaves into the United States? The slave trade had been banned in the United States since 1808, as provided for by the United States Constitution, which is to say that importing human beings for the purpose of slavery was a federal crime. The punishment was death by hanging.

Ultimately, Corrie and his partners evaded conviction—there were three trials, all held in Savannah, all of which ended with a Savannah jury refusing to convict their local grandees. Oncoming events would, to be sure, soon make the whole episode beside the point. Still, Corrie and Lamar could not have known that, and on the whole the desperate crime seems rather a long chance to take. But the syndicate, led by Lamar, had two motives: one economic, and the other ideological.

The first motive was grasped by Thomas Jefferson, of all people, as early as 1792. Jefferson memorialized his thought, according to Smithsonian magazine, “in a barely legible, scribbled note in the middle of a page, enclosed in brackets.” The earth-shaking, terrible thought was this: “he was making a 4 percent profit every year on the birth of black children.” In other words, just as with the land his slaves worked, every year brought an increase in the value of Jefferson’s human capital. The value of slaves would, with time, become almost incredible: “In 1860,” historian David Brion Davis has noted, “the value of Southern slaves was about three times the amount invested in manufacturing or railroads nationwide.” And that value was only increased by the ban on the slave trade.

First, then, the voyage of the Wanderer was an act of economic arbitrage, one that sought to exploit the price difference between slaves in Africa and those in the United States. But it was also an act of provocation—much like John Brown’s raid on Harpers Ferry less than a year after the Wanderer landed in Georgia. Like the more celebrated case, the sailing of the Wanderer was meant to demonstrate that slave smuggling could be done—it was meant to inspire further acts of resistance to the ban on the importation of slaves.

Lamar was, after all, a Southern “firebrand,” of a type common in the Lowcountry and represented in print by the Charleston Mercury. The firebrands advocated resuming the African slave trade: essentially, the members of this group believed that government shouldn’t interfere with the “natural” process of the market. Southerners like Lamar and Corrie were thus the ancestors of those who today believe that, in the words of the Italian sociologist Marco d’Eramo, “things would surely improve if only we left them to the free play of market forces.”

The voyage of the Wanderer was, in that sense, meant to demonstrate the thesis that, as Thomas Frank has observed of the way these men’s ideological descendants put it, “it is the nature of government enterprises to fail.” The mission of the slave ship, that is, could be viewed as on a par with what Frank calls conservative cautions “against bringing top-notch talent into government service” or piling up “an Everest of debt in order to force the government into crisis.” That the demonstration was wholly contrived must, it seems, have been lost on the Wanderer’s sponsors.

Surely, then, it isn’t difficult to explain the affinity between a certain kind of South Carolinian thought and that of wealthy people today. What’s interesting about the whole episode, from today’s standpoint, is how that line of thought was ultimately defeated: by what appears, from one perspective at least, to be another case of arbitrage. In this case, the arbitrageur was named Abraham Lincoln, and he laid out what he was going to arbitrage long before the voyage of the Wanderer. He did so in a speech in Peoria in the autumn of 1854, the speech that marked Lincoln’s return to politics after the defeat that had followed his opposition to the Mexican War in the late 1840s. In that speech, Lincoln laid the groundwork for the defeat of slavery by describing how slavery had artificially interfered with a market—the one whose currency is votes.

The crucial passage of the Peoria speech comes when Lincoln compares two states: South Carolina being one, likely not so coincidentally, and Maine being the other. Both states, Lincoln observes, are equally represented in Congress: “South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine.” “Thus in the control of the government,” Lincoln concludes, “the two States are equals precisely.” But, Lincoln goes on to note, observe the numbers of their free people: “Maine has 581,813—while South Carolina has 274,567.” Somehow, then, the Southern voter “is more than double of any one of us in this crowd” in terms of control of the federal government: “it is an absolute truth, without an exception,” Lincoln said, “that there is no voter in any slave State, but who has more legal power in the government than any voter in any free State.” There was, in sum, a discrepancy in value—or what economists might call an “inefficiency.”

The reason for that discrepancy was, as Lincoln also observed, “in the Constitution”—by which he referred to what has become known as the “Three-Fifths Compromise,” or Article One, Section 2, Paragraph 3: “Representatives and direct Taxes shall be apportioned among the several States … according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons … [and] three fifths of all other Persons.” By this means, Southern states received representation in the federal government well in excess of what their free populations alone would have warranted: in addition to the increase in wealth obtained by the reproduction of their slaves, then, slaveowners also benefitted politically.
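A short arithmetic sketch may make the mechanism concrete. The free-population figures below are the ones Lincoln quotes above; the enslaved-population figure in the second half is a hypothetical round number used only to illustrate the apportionment rule, not census data.

```python
# Lincoln's Peoria arithmetic, plus a sketch of the Three-Fifths mechanism.
# Free-population figures are those quoted in the speech; the enslaved-population
# figure below is a hypothetical round number, not census data.

maine_free = 581_813
south_carolina_free = 274_567

# Both states held six House seats and eight electors, so a voter's relative
# weight is simply the inverse ratio of the free populations.
ratio = maine_free / south_carolina_free
print(f"One South Carolina voter weighed about {ratio:.2f} Maine voters")  # ~2.12

# The mechanism: apportionment counted all free persons plus 3/5 of the enslaved.
enslaved = 400_000  # hypothetical round number
apportionment_basis = south_carolina_free + (3 / 5) * enslaved
print(f"Apportionment basis: {apportionment_basis:,.0f} "
      f"for {south_carolina_free:,} free inhabitants")
```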

In an article for the New York Times’ series Disunion (“The Census of Doom”), which blogged the Civil War as it happened, Adam Goodheart observes that over the decade between the 1850 United States Census and the 1860 edition of the same, the population of the North exploded by 41 percent, while that of the South grew by only 27 percent. (By comparison, Goodheart points out, between 2000 and 2010 the United States population grew by just 9.7 percent.) To take one state as an example: in less than 25 years one Northern state—Wisconsin—had grown by nearly 6400 (sic) percent. Wisconsin would, of course, go heavily for Lincoln in the presidential election—Lincoln would be the first president ever elected without the support of a single Southern state. (He wasn’t even on the ballot in most.) One Northern newspaper editor, Goodheart notes, smugly observed that “The difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” Lincoln’s election confirmed that the political power held by the Southern states since the nation’s founding, with the help of that electoral concession, had been broken by a wash of new Northern voters.

If read in that light, then, the Thirteenth and Fourteenth Amendments to the Constitution, which ended both slavery and the Three-Fifths Clause, could be understood as a kind of price correction: the two amendments effectively ended the premium that the Constitution had until then placed on Southern votes. Lincoln becomes a version of Brad Pitt’s character in the movie of Michael Lewis’ most famous book—Billy Beane in Moneyball. Just as Billy Beane saw—or was persuaded to see—that batting average was overvalued and on-base percentage undervalued, thus creating an arbitrage possibility for players who walked a lot, Lincoln saw that Southern votes were overvalued and Northern ones undervalued, and that (sooner or later) the two had to converge towards what economists would call “fundamental value.”

That concept is something golf teaches well. In golf, there are no differences in value to exploit: each shot has just the same fundamental value. On our first tee that day, which was the tenth hole at Oldfield Country Club, my golfer actually didn’t blow his first shot out of bounds—though I had fully expected that to happen. He did come pretty close, though: it flew directly into the trees, a slicing, left-to-right block. I took off after everyone had teed off: clearly the old guy who was marshaling the hole wasn’t going to be of much help. But I found the ball easily enough, and my player pitched out and ended up making a great par save. The punch-out shot from the trees counted just the same as an approach shot might have, or as a second putt.

Understanding the notion of fundamental value that golf (among other human pursuits) teaches allows a further realization: the “price correction” undertaken by Lincoln wasn’t simply a one-time act, because the value of an American vote still varies across the nation today. According to the organization FairVote, as of 2003 a vote in Wyoming was more than three times more valuable than, say, my vote as a resident of the state of Illinois. Even today—as the Senate’s own website notes—“senators from the twenty-six smallest states, who (according to the 2000 census) represent 17.8% of the nation’s population, constitute a majority of the Senate.” It’s a fact the men of the Secession Golf Club might just as soon have people ignore—because it just may be why 93 percent of the economic gains since the Great Recession have gone to the wealthy.
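By way of a sketch of that kind of comparison: the elector counts and populations below are rough early-2000s approximations assumed purely for illustration, not FairVote’s own figures or methodology.

```python
# A rough sketch of per-capita vote weight: electoral votes per million residents.
# The elector counts and populations are early-2000s approximations assumed for
# illustration; FairVote's own methodology is more involved.

electoral_votes = {"Wyoming": 3, "Illinois": 21}
population_millions = {"Wyoming": 0.5, "Illinois": 12.5}

def votes_per_million(state: str) -> float:
    return electoral_votes[state] / population_millions[state]

wy = votes_per_million("Wyoming")
il = votes_per_million("Illinois")
print(f"Wyoming: {wy:.1f} electoral votes per million residents")
print(f"Illinois: {il:.1f} electoral votes per million residents")
print(f"Ratio: about {wy / il:.1f} to 1")  # on the order of the 'three times' above
```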

To take a small example of how the two points might be connected, a recent New Yorker piece has pointed out that “in the fifth year of his Presidency, Obama has failed to place even a single judge on the D.C. Circuit, considered the second most important court in the nation” because the Senate has refused to confirm any of his nominees. This despite the fact that there are now four vacancies out of eleven seats. Why? Because the Senate’s rules allow a minority of Senators—or even just one, in the case of what’s known as the “hold”—to interfere with the will of the majority: an advantage Republican senators have not hesitated to seize.

Nearly twenty years after the publication of Bend Sinister, Nabokov chose to write an introduction in which he endeavored to explain the novel’s name. “This choice of title,” he wrote, “was an attempt to suggest an outline broken by refraction, a distortion in the mirror of being, a wrong turn taken by life, a sinistral and sinister world.” If there are wrong turns, of course, that would suggest that there are right ones; if there are “distortions,” then there are clarities: that is, there is an order to which events will, sooner or later, return. It’s a suggestion that is not fashionable these days: Nabokov himself isn’t read much today for his own beliefs so much as for the confirmation his novels can provide for one or another thesis. But if he is right—if golf’s belief in “fundamental value” is right—then there must necessarily come some correction to this ongoing problem of the value of a vote.

The location of the new Fort Sumter, however, remains unknown.