A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton. Paradise Lost, Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book, The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does offer quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, Eagleton is clearly the better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that someone could be just as educated in theology as Eagleton—and yet arguably hold views closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed; Eagleton has even reviewed a biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the Church. Just as the scholar Richard Hofstadter called the American Senator John Calhoun of South Carolina “the Marx of the master class,” in other words, so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas—the “Angelic Doctor,” as Catholics sometimes call him—as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by examining old books and telling people what they found there, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not only that: the point can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton rightly notes, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That is a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what is strange about Eagleton’s citing this piece of Aquinas’ thought is that it is usually reckoned a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. The axiom—sometimes called “Occam’s Razor”—was echoed by Isaac Newton in the Principia, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists, however, seem to have held a position closer to the view supposedly expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is different: instead of accepting the single-world hypothesis and merely disputing whether God belongs in that world’s description, Eagleton quarrels with the “principle of parsimony” itself—with the claim that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question, to my mind, is this: although Eagleton—as befits a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? I ask because the “principle of parsimony” is an essential part of a political case that has been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such thinking has been limited neither to the eighteenth century nor to the French side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there is reason, in fact, to think the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and indeed, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is however not limited to European sources. The Marquis de Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed toward water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the British Parliament passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, the Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, claiming—much as Norris had—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when—according to a 2015 report by Betty Drexage for the Dutch government—there were three fewer unicameral legislatures (113) and two more bicameral ones (79). Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with the widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge depicting the sciences as opposed to something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.


Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
—Traditional; rec. by Charles Davis Tilman, 1889. Lexington, South Carolina.

… science is but one.
—Lucius Annaeus Seneca.

Rule changes in golf usually come into effect on the first of the year; this year, the big news is the ban on “anchored” putting: the practice of holding one end of the putter in place against the player’s body. Yet, as has been the case for nearly two decades, the real news from the game’s rule-makers this January concerns a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as, for instance, Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued then that something needed to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis since have demonstrated, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical, since so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe a regulatory body could have any advantage over a “market-maker” like King. Further, it’s unlikely that many people, their religious training long behind them, remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments would begin with something important—like, say, a prohibition against murder?—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps, most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the grounds that, as the title of his essay suggests, each subject possesses its own, non-overlapping magisterium: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say, then, that the First Commandment could be thought of as a logical rule seen as if through a glass darkly would be impermissible under the prohibition laid down by Gould (among others): the prohibition against importing science into religion, or vice versa. And yet some argue that such a prohibition is nonsense: Richard Dawkins, for instance, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, does not limit itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or, more specifically, is drawn to protect religion.

Supposing Dawkins, and not Gould, to be correct, then, is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment can be thought of as a reflection of what’s known as the “law of non-contradiction,” usually listed as the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.
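
In modern propositional notation (my gloss; Aristotle stated the law in prose, not symbols), the whole principle fits on a single line:

```latex
% The law of non-contradiction: for any proposition P,
% P and its negation cannot both be true.
% (A modern formalization, not Aristotle's own notation.)
\[
  \neg\,(P \wedge \neg P)
\]
```

On this reading, the First Commandment makes the same demand of gods that the formula makes of propositions: two rivals cannot both hold.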

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936, F. Scott Fitzgerald claimed in Esquire that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric of the First Commandment, or of the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty, in other words, would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, create more kinds of golf. But the work of the Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, on what happens when the supply of a commodity is limited rather than (potentially) infinite.
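
In textbook notation (my gloss, not Wikipedia’s or Stiglitz’), the law of demand says only that quantity demanded moves inversely with price:

```latex
% The "law of demand" in standard textbook form:
% quantity demanded Q_d falls as the price P rises, and vice versa.
\[
  Q_d = D(P), \qquad \frac{\mathrm{d}Q_d}{\mathrm{d}P} < 0
\]
```

Stiglitz’ point, as we are about to see, concerns what happens at the far ends of that curve, where the tidy inverse relation breaks down.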

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth; but even setting aside the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of what the classical “law of demand” would predict. Under the classical scenario, as money—or any commodity one likes—becomes rarer, people are driven to bid more for it. But, Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity stop bidding the price up, and instead look for substitutes for that commodity. Thus, overall “demand” must necessarily decline.
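
A toy calculation makes the arithmetic concrete. The sketch below is mine, not Stiglitz’: only the $21.7 million figure and the 500-person split come from his example, while the spending rates (the “marginal propensity to consume”) are purely illustrative assumptions.

```python
# A toy model of Stiglitz' point: the same total income yields far less
# spending when concentrated in one pair of hands, assuming (illustratively)
# that very high earners spend a much smaller fraction of their income.

def total_spending(incomes, high_rate=0.10, typical_rate=0.95,
                   threshold=1_000_000):
    """Sum spending across incomes, applying a lower propensity to
    consume to incomes above the threshold (assumed rates, not data)."""
    return sum(
        income * (high_rate if income > threshold else typical_rate)
        for income in incomes
    )

romney_income = 21_700_000                 # Stiglitz' figure for 2010

concentrated = [romney_income]             # one very large income
dispersed = [romney_income / 500] * 500    # the same sum across 500 people

print(f"concentrated: ${total_spending(concentrated):,.0f}")  # ~$2.2 million
print(f"dispersed:    ${total_spending(dispersed):,.0f}")     # ~$20.6 million
```

Under those assumed rates, dispersing the same income returns nearly ten times as much spending to the economy; that is the sense in which concentration depresses aggregate demand.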

That, for instance, is what happened to cotton after 1860. That year, cotton grown in the southern United States was America’s leading export, constituting (as Eugene R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain” that year. But as the war advanced—and the Northern blockade took effect—that share plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, other sources of supply took up the slack: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports amounted to merely $7 million prior to the bombardment of Fort Sumter in 1861—but by the end of the war in 1865, Egyptian profits were $77 million, as Europeans sought sources of supply other than the blockaded South. This despite the fact that Egyptian cotton was widely acknowledged to be inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South, in other words, failed to understand that, while it accounted for the lion’s share of production before the war, it was not the only place cotton could be grown—other sources of supply existed. In some cases, however—through natural or human-created means—an underlying commodity can have a bottleneck of some kind, creating a shortage. According to classical economic theory, demand for the commodity should then grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether to an inferior substitute or, perhaps, to giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to have such a stranglehold on the market that it effectively kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation admits of only two solutions: either the monopoly is broken, or people turn to a completely different substitute. To use an idiom from baseball, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s annual participation survey found 30 million players; by 2014, according to a Golf Digest story by Mike Stachura, the number was slightly fewer than 25 million. Mark King’s plan to win those numbers back, as we’ve seen, is to invent a new set of rules—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you could take up another set of rules.

Yet an examination of the sport as it actually is, I’d say, would find that Rorty’s ideal already describes, more or less, the current model of golf in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Despite the fact that, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” private clubs constitute around 30 percent of all golf facilities, and, as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means to learn the game. To put it in Stiglitz’ terms, the PGA of America—and the private clubs that hire PGA professionals to staff their operations—essentially hold a monopoly on instruction, or in other words on the basic education in the essential skill of the game: hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers take up the game to enjoy. Unfortunately, it’s one that most golfers never achieve: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to re-achieve the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that the reason golf’s participation rate has declined is not an unfair set of rules, but rather that golf’s model bears more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant holds so much power that it effectively destroys the entire market. In situations like that, Stiglitz (and many other economists) argue that regulatory intervention is necessary—a realization at which, perhaps, the United States Golf Association is also arriving, through its continuing decision not to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect, by continuing to allow some to monopolize access to the pleasures of the sport. By refusing to allow an “escape hatch” through which the older model could cling to life, the USGA is, consciously or not, speeding the day when golf will become “all one thing or all the other,” as someone once said on a vaguely similar occasion—invoking an idea not unlike the First Commandment, or the law of non-contradiction. What the USGA’s stand in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” just might be no different from, say, claiming that your subprime mortgages are sound, or that police figures accurately reflect crime. For the USGA, then, if no one else, that old time religion is good enough: despite being against anchoring, it seems the golf organization still believes in anchors.