I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson, “Letter to William Charles Jarvis,” September 28, 1820.

 

 

When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C. by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in its 2009 Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy, of The New Yorker, recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take, for instance, the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hasn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like former presidential economic advisor and president of Harvard University, Lawrence Summers, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, once depreciation is netted out, new spending does little more than offset the wear on the structures we already have.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35W bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although the measure was supported by 51 votes in favor to 49 votes against, the measure failed to pass—because, as Helderman wrote, according to the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, nearly certainly the United States would be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority for ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, the Senate was originally designed to be an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, which involves George Washington saying to Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, still it captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified his remarks, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s in fact precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history, in other words, demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment not only in roads and canals and bridges, but also in a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as that goes. But Nagel’s somewhat bland remarks do not do justice to the matter of how Adams’ proposals were defeated.

After the election of 1824, which elected the 19th Congress, Adams’ party had a majority in the House of Representatives—one reason why Adams became president at all, because the chaotic election of 1824, split among four major candidates, was decided (as per the Constitution) by the House of Representatives. But while Adams’ faction had a majority in the House, they did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 Senators (in a Senate of 48 senators, two to a state) while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires agreement between the two houses of Congress as the national legislature before bills can become law, this meant that the Senate could—as it did—effectively veto any of the Adams party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly to be more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a matter seemingly remote from the matters of building bridges or canals. Hence, although senators like John Tyler of Virginia, for example—who would later become president himself—opposed Adams-proposed “bills that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, they were actually as venal as any motive could be.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon” were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, in reality these seemingly resounding defenses of limited government were actually motivated by a less-than-savory interest: “statesmen of the Old South,” Hyman wrote, found that these doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect that such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates between a few dozen, usually elderly, white men nearly two centuries ago have been rendered impotent by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at a different point in American history, it was well-understood that the creation of such “blue-ribbon” committees or the like—such as the Senate—was in fact simply a means for elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which then makes for “minority, not majority, rule.” In other words, even if many circumstances in American life have changed since 1825, it still remains the case that the American government is (still) largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control the American government. And while slavery was abolished by the Civil War, it still remains the case that a minority can block things like infrastructure spending.
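Sturgis’s arithmetic can be made concrete with a short sketch (Python, purely illustrative; the function name and the treatment of the thresholds as fractions of votes cast are my own assumptions, not anything Sturgis wrote):

```python
def motion_passes(yes: int, no: int, threshold: float = 2 / 3) -> bool:
    """Return True if the 'yes' votes meet the required fraction of votes cast."""
    total = yes + no
    return yes >= threshold * total

# Sturgis's example: 65 in favor, 35 against.
print(motion_passes(65, 35))        # two-thirds rule: the 35 prevail -> False
print(motion_passes(65, 35, 0.5))   # simple majority: the 65 prevail -> True

# The 2011 Senate infrastructure vote: 51-49 in favor, but 60 votes were
# needed to proceed (here approximated as 0.6 of the votes cast).
print(motion_passes(51, 49, 0.6))   # -> False
```

The point falls out immediately: raise the threshold above one-half, and the decision passes from the larger bloc to the smaller one.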

Hence, since infrastructure spending is—nearly by definition—for the improvement of every American, it’s difficult to see how making infrastructure spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason why we aren’t spending money on infrastructure, not because (as pundits like Cassidy might have it), “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. In fact, it is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

Thought Crimes

 

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
—Sherlock Holmes, The Sign of Four (1890).

 

Whence heavy persecution shall arise
On all, who in the worship persevere
Of spirit and truth; the rest, far greater part,
Will deem in outward rites and specious forms
Religion satisfied; Truth shall retire
Bestuck with slanderous darts, and works of faith
Rarely be found: So shall the world go on …
—John Milton, Paradise Lost, Book XII, 531–37.

 

When Tiger Woods, just after four o’clock Eastern time, hit a horrific duck-hook tee shot on Augusta National’s 13th hole during the third round of the Masters tournament Saturday, the golfer sent one of George Carlin’s “seven dirty words” after it, live on air. About an hour later, around a quarter after five, the announcer Ian Baker-Finch caught himself before uttering a taboo phrase: although he began by saying “back,” the Australian quickly corrected himself by saying “second nine.” To the novice Masters viewer the two misuses of language might appear quite different (Baker-Finch’s slip, that is, being far less offensive), but longtime viewers are aware that, had Baker-Finch not saved himself, his error would have been the more serious incident—to the extent, in fact, that he might have lost his job. Just why that is so is difficult to explain to outsiders unfamiliar with Augusta National’s particular vision of decorum; it may, however, be explained by one of the broadcast’s few commercials: an advert whose tagline connects a golf commentator’s innocent near-mistake to an argument about censorship conducted at the beginning of this year—in Paris, at the business end of a Kalashnikov.

France is a long way from Georgia, however, so let’s begin with how what Ian Baker-Finch almost said would have been far worse than Tiger’s f-bombs. In the first place, that is because, as veterans of watching the Masters know, the announcing team is held to very strict standards largely unique to this sporting event. Golf is, in general, far more concerned with “decorum” and etiquette than other sports—it is, as its enthusiasts often remark, the only one where competitors regularly call penalties on themselves—but the Masters tournament examines the language of its broadcasters to an extent unknown even at other golf tournaments.

In 1966, for example, broadcaster Jack Whitaker—as described in the textbook, Sports Media: Planning, Production, and Reporting—“was canned for referring to Masters patrons as a ‘mob,’” while in 1994 Gary McCord joked (as told by Alex Myers in Golf Digest) “that ‘bikini wax’ is used to make Augusta National’s greens so slick”—and was unceremoniously dumped. Announcers at the Masters, in short, are well aware they walk a fine line.

Hence, while Baker-Finch’s near-miss was by no means comparable to McCord’s attempts at humor, it was serious because it would have broken one of the known “Augusta Rules,” as John Feinstein called them in Moment of Glory: The Year Underdogs Ruled Golf. “There are no front nine and back nine at Augusta but, rather, a first nine and a second nine,” Feinstein wrote; a rule that, it’s said, developed because the tournament’s founders, the golfer Bobby Jones and the club chairman Clifford Roberts, felt “back nine” sounded too close to “back side.” The Lords of Augusta, as the club’s members are sometimes referred to, will not stand for “vulgarity” from their announcing team—even if the golfers they are watching are sometimes much worse.

Woods, for example (as the Washington Post reported), “followed up a bad miss left off the 13th tee with a curse word that was picked up by an on-course microphone, prompting the CBS announcers to intone, ‘If you heard something offensive at 13, we apologize.’” Yet even though Baker-Finch, had he uttered the unutterable, would only have suggested what Woods baldly verbalized, it’s unimaginable that Woods could suffer the same fate as a CBS announcer would, or be penalized in any way. The uproar that would follow if, for instance, the Lords decided to ban Tiger from further tournaments would make all previous golf scandals appear tame.

Admittedly, the difference in treatment could be justified by the fact that Woods is a competitor (and four-time winner) in the tournament while announcers are ancillary to it. In philosophic terms, players are essential while announcers are contingent: players just are the tournament because without them, no golf. The same can’t be said of any particular broadcaster (though, when it comes to Jim Nantz, a fixture of the broadcast since 1986, it might be close). From that perspective, then, it might make sense that Tiger’s “heat-of-the-moment” f-bombs are not as significant as a slip of the tongue by an announcer trained to speak in public could be.

Such, at least, might be a rationale for the differing treatment accorded golfers and announcers: so far as I am aware, neither the golf club nor CBS has come forward with an explanation regarding the difference. It was while I was turning this over in my mind that one of the tournament broadcast’s few commercials came on—and I realized just why the difference between Tiger’s words and, say, Gary McCord’s in 1994 had stuck in my brain.

The ad in question consisted of different people reciting, over and over again, a line once spoken by IBM pioneer Thomas Watson in 1915: “All of the problems of the world could be settled easily if men were only willing to think.” Something about this phrase—repeated so often it became quite literally like a mantra, defined as a “sacred utterance, numinous sound” by Wikipedia—rattled something in my head, which ignited a slight Internet investigation: it seems that, for IBM, that last word—think—became a catchword after 1915; the word was plastered on company ephemera, gave the company magazine its name, and even, in recent times, became the basis for the name of such products as the ThinkPad. The sentence, it could be said, is the official philosophy of the company.

As philosophies go it seems inarguable that this is rather a better one than, for instance, one that might demand “silence your enemies wherever possible.” It is, one might say, a hopeful sentence—if only people were willing to use their rationality, the difficult and the intractable could be vanquished. “Think,” in that sense, is a sentiment that seems quite at odds with the notion of censorship: without airing what someone is thinking, it appears impossible to believe that anything could be settled. In order to get people to think, it seems clear that they must be allowed to talk.

Such, at least, is one of the strongest pillars of the concept of “free speech,” as the English and law professor Stanley Fish has pointed out. Fish quotes, as an example of the argument, the Chairman of the National Endowment for the Humanities, James A. Leach, who gave a speech in 2009 claiming that “the cornerstone of democracy is access to knowledge.” In other words, in order to achieve the goal outlined by Watson (solving the world’s problems), it’s necessary to put everyone’s views in the open in order that they might be debated—a notion usually conceptualized, in relation to American law, as the “marketplace of ideas.”

That metaphor traces back to American Supreme Court justice Oliver Wendell Holmes, Jr.’s famous dissent in a case called Abrams v. United States, decided in 1919. “The ultimate good desired,” as Holmes wrote in that case (interestingly, in the light of his theory, against the majority opinion), “is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” That notion, in turn, can (as Fish observes) be followed back to English philosopher John Stuart Mill, and even beyond.

“We can never be sure that the opinion we are endeavoring to stifle is a false opinion,” Mill wrote in his On Liberty, “and if we were sure, stifling it would be an evil still.” Yet further back, the thought connects to John Milton’s Areopagitica, where the poet wrote “Let [Truth] and Falsehood grapple; who ever knew Truth put to the worse in a free and open encounter?” That is, so long as opinions can be freely shared, any problem could in principle be solved—more or less Thomas Watson’s point in 1915.

Let’s be clear, however, what is and what is not being said. That is, the words “in principle” above are important because I do not think that Watson or Mill or Milton or Holmes would deny that there are many practical reasons why it might be impossible to solve problems with a meeting or a series of meetings. No one believes, for instance, that the threat of ISIS could be contained by a summit meeting between ISIS and other parties—the claim that Holmes & Watson (smirk) et al. would make is just that the said threat could be solved if only that organization’s leaders would agree to a meeting. Merely objecting that many times such conceivable meetings are not practical isn’t, in that sense, a strong objection to the idea of the “idea market”—which asserts that in conditions of what could be called “perfect communication” disagreement is (eventually) impossible.

That however is precisely why Fish’s argument against the “market” metaphor is such a strong one: it is Fish’s opinion that the “marketplace” metaphor is just that—a metaphor, not a bedrock description of reality. In an essay entitled “Don’t Blame Relativism,” in fact, Fish apparently denies “the possibility of describing, and thereby evaluating” everything “in a language that all reasonable observers would accept.” That is, he denies the possibility that is imagined by Thomas Watson’s assertion regarding “[a]ll of the problems of the world”: the idea that, were only everyone reasonable, all problems could be solved.

To make the point clearer, while in Watson’s metaphor (which is also Milton’s and Mill’s and Holmes’), in theory everything can be sorted out if only everyone came to the bargaining table, to Fish such a possibility is not only practically impossible, but also theoretically impossible. Fish’s objection to the “market” idea isn’t just that it is difficult, for instance, to find the right translators to speak to different sides of a debate in their own language, but that even were all conditions for perfect communication met, that would not guarantee the end of disagreement.

It’s important to note at this point that this is a claim Fish needs to make in order to make his argument stick, because if all he does is advance historically based arguments to the effect that at no point in human history has the situation described by Watson et al. ever existed, their partisans can counterclaim that just because no one has yet seen perfect communication, that’s no reason to think it might not someday be possible. Such partisans might, for example, quote Alice Calaprice’s The Quotable Einstein, which asserts that Einstein once remarked that “No amount of experimentation can prove me right; a single experiment can prove me wrong.” Or, as the writer Nassim Nicholas Taleb has put the same point while asserting that it ultimately traces back through John Stuart Mill to David Hume: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” In other words, Fish could be right that no such perfect communication has ever existed, but it would be logically inconsistent to try to claim that such evidence implies that it could never be possible.

To engage his opponents, then, Fish must take to the field of “theory,” not just adduce historical examples. That is why Fish cannot just claim that, historically, even regimes that claim to follow the creed of Watson and Holmes and so on in theory do not actually follow that creed in reality, though he does make that argument. He points out, for instance, that even in the Areopagitica, otherwise a passionate defense of “free speech,” Milton allowed that while “free speech” is all well and good for most people most of the time, he does not mean to imply “tolerated popery” (i.e., Catholics), because as that religion (according to Milton) “extirpates all religious and civil supremacies, so itself should be extirpate.”

In other words, Milton explains that anything that threatens the idea of “free speech” itself—as Catholicism, in Milton’s day arguably in the throes of the Inquisition, did so threaten—should not be included in the realm of protected speech, since that “which is impious or evil absolutely against faith or manners no law can possibly permit that intends not to unlaw itself.” And while it might be counterclaimed that in Milton’s time “free speech” was imperfectly realized, Fish also demonstrates that while Catholicism no longer constitutes a threat to modern “free speech” regimes, there are still exceptions to what can be said publicly.

As another American Supreme Court justice, Robert Jackson, would put the point centuries later, “the constitutional Bill of Rights”—including, one presumes, the free-speech-protecting First Amendment—is not “a suicide pact.” Or, as Fish himself put the same point, even today the most tolerant governments still ask themselves, regarding speech, “would this form of speech or advocacy, if permitted to flourish, tend to undermine the very purposes for which our society is constituted?” No government, in other words, can allow the kind of speech that threatens to end the practice of free speech itself.

Still, that is not enough to disrupt the “free speech” argument, because even if perfect communication has not been exemplified yet on this earth, that does not mean that it could not be someday. To make his point, Fish has to go further, which he does in an essay called “There’s No Such Thing As Free Speech, And It’s A Good Thing Too.”

There, Fish says that he is not merely claiming that “saying something … is a realm whose integrity is sometimes compromised by certain restrictions”—that would be the above argument, where historical evidence is advanced—but rather “that restriction, in the form of an underlying articulation of the world that necessarily (if silently) negates alternatively possible articulations, is constitutive of expression.” The claim Fish wants to make in short—and it is important to see that it is the only argument that can confront the claims of the “marketplace of ideas” thesis—is that restrictions, such as Milton’s against Catholicism, aren’t the sad concessions we must make to an imperfect world, but are in fact what makes communication possible at all.

To those who take what’s known as a “free speech absolutism” position, such a notion might sound deeply subversive, if not heretical: the answer to pernicious opinions, in the view of the free speech absolutist, is not to outlaw them, but to produce more opinions—as Oliver Wendell Holmes, Mill, and Milton all advise. The headline of an editorial in Toronto’s Globe and Mail puts the point elegantly: “The lesson of Charlie Hebdo? We need more free speech, not less.” But what Fish is saying could be viewed in the light of the narrative described by the writer Nassim Nicholas Taleb about how he derived his saying regarding “black swans” under the influence of John Stuart Mill and David Hume.

Taleb says that “Hume had been irked by the fact that science in his day … had experienced a swing from scholasticism, entirely based on deductive reasoning,” to “an overreaction into naive and unstructured empiricism.” The difficulty, as Hume recognized, “is that, without a proper method”—or, as Fish might say, a proper set of constraints—“empirical observations can lead you astray.” It’s possible, in other words, that amping up production of truths will not—indeed, perhaps can not—produce Truth.

In fact, Taleb argues (in a piece entitled “The Roots of Unfairness: the Black Swan in Arts and Literature”) that in reality, contrary to the fantasies of free speech absolutists, the production of very many “truths” may tend to reward a very few examples at the expense of the majority—and that thus “a large share of the success” of those examples may simply be due to “luck.” The specific market Taleb is examining in this essay is the artistic and literary world, but like many other spheres—such as “economics, sociology, linguistics, networks, the stock market”—that world is subject to “the Winner-Take-All effect.” (Taleb reports that Robert H. Frank defined that effect in his article, “Talent and the Winner-Take-All Society,” as “markets in which a handful of top performers walk away with the lion’s share of total rewards.”) The “free speech absolutist” position would define the few survivors of the “truth market” as being, ipso facto, “the Truth”—but Taleb is suggesting that such a position takes a more sanguine view of the market than may be warranted.

The results of Taleb’s investigations imply that such is indeed the case. “Consider,” he observes, “that, in publishing, less than 1 in 800 books represent half of the total unit sales”—a phenomenon similar to that found by Arthur De Vany for the movie business in his Hollywood Economics. And while those results might be dismissed as the product of crass commercial motives, in fact the “academic citation system, itself supposedly free of commercialism, represents an even greater concentration” than that found in commercial publishing, and—perhaps even more alarmingly—there is “no meaningful difference between physics and comparative literature”: both display an equal amount of concentration. In all these fields, a very few objects are hugely successful, while the great mass sinks like a stone into the sea of anonymity.

Nor are these results confined to artistic or scientific production; they apply to subjects as diverse as the length of the coastline of Britain and the error rates in telephone calls. George Zipf, for example, found that the rule applied to the “distribution of words in the vocabulary,” while Vilfredo Pareto found that it applied to the distribution of income in any given society.

“Now,” asks Taleb, “think of waves of one meter tall in relation to waves of 2 meters tall”—there will inevitably be many more one meter waves than two meter waves, and by some magic the ratio between the two will be invariant, just as, according to what linguists call “Zipf’s Law,” “the most frequent word [in a given language] will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word,” and so on. As the Wikipedia entry for Zipf’s Law (from which the foregoing definition is taken) observes, the “same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, ranks of number of people watching the same TV channel, and so on.” All of these subjects are determined by what have come to be known as power laws—and according to some researchers, they even apply to subjects as seemingly immune to them as music.
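The rank-frequency rule quoted above can be checked arithmetically. Below is a minimal Python sketch (an idealized illustration invented for this purpose, not data from Zipf or Taleb) that builds the expected frequencies of a hypothetical 10,000-word vocabulary under a pure 1/r Zipf law, confirms the two-to-one and three-to-one ratios the text describes, and measures how concentrated the result is:

```python
# Idealized Zipf's Law: the frequency of the item at rank r is
# proportional to 1/r. A hypothetical 10,000-"word" vocabulary.

def zipf_frequencies(vocab_size, total_tokens):
    """Return expected token counts per rank under an ideal 1/r Zipf law."""
    harmonic = sum(1.0 / r for r in range(1, vocab_size + 1))
    return [total_tokens / (r * harmonic) for r in range(1, vocab_size + 1)]

freqs = zipf_frequencies(vocab_size=10_000, total_tokens=1_000_000)

# The most frequent word occurs twice as often as the second most
# frequent, three times as often as the third -- the invariant ratios.
print(round(freqs[0] / freqs[1], 2))  # 2.0
print(round(freqs[0] / freqs[2], 2))  # 3.0

# Concentration: the top 1% of words (100 of 10,000) claim over half
# of all tokens, echoing the winner-take-all pattern above.
top_share = sum(freqs[:100]) / sum(freqs)
print(round(top_share, 2))
```

The striking point is the last number: even in this perfectly regular toy model, a hundredth of the vocabulary accounts for a majority of everything said.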

Zipf himself proposed that the distribution he discovered among words arose from a kind of physical process, rather than from discernment on the part of language-users: “people aim at minimizing effort in retrieving words; they are lazy and remember words that they have used in the past, so that the more a word is used, the more likely it is going to be used in the future, causing a snowball effect.” The explanation has an intuitive appeal: it appears difficult to argue that “the” (the most common English word) communicates twice as much information as “be” (the second-most common English word). Still less could such an argument explain why those word distributions should mirror the distributions of American cities, say, or the height of the waves on Hawaii’s North Shore, or the metabolic rates of various mammals. The widespread appearance of such distributions, in fact, suggests that rather than being determined by forces “intrinsic” to each case, the distributions are driven by a natural law that cares nothing for specifics.
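Zipf’s “snowball” mechanism is what would now be called a rich-get-richer (or preferential-attachment) process, and a simulation shows that such a process generates heavy concentration even though no word is intrinsically better than any other. The sketch below is hypothetical throughout—the 5% coinage rate, the token count, and the function names are all invented for illustration:

```python
import random

random.seed(42)

# A minimal rich-get-richer sketch of Zipf's "lazy speaker": each new
# token either coins a brand-new word (probability alpha) or repeats a
# token drawn uniformly from everything said so far -- so a word's chance
# of reuse is proportional to how often it has already been used.

def simulate_speech(n_tokens, alpha=0.05):
    counts = {}    # word -> number of uses
    history = []   # every token uttered so far
    next_word = 0
    for _ in range(n_tokens):
        if not history or random.random() < alpha:
            word = next_word               # coin a new word
            next_word += 1
        else:
            word = random.choice(history)  # reuse: the snowball effect
        counts[word] = counts.get(word, 0) + 1
        history.append(word)
    return counts

counts = simulate_speech(100_000)
ranked = sorted(counts.values(), reverse=True)

# A handful of early words dominate purely through accumulated luck.
top_ten_share = sum(ranked[:10]) / sum(ranked)
print(f"distinct words: {len(ranked)}, top-10 share: {top_ten_share:.2f}")
```

Roughly five thousand distinct words get coined, yet the ten most-used ones take a grossly disproportionate share of the total—success driven, as Taleb says of novelists, largely by luck.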

So far, it seems, “we have no clue about the underlying process,” as Taleb says. “Nothing can explain why the success of a novelist … bears similarity to the bubbles and informational cascades seen in the financial markets,” much less why both should “resemble the behavior of electricity power grids.” What we can know is that, while according to the “free speech absolutist” position “one would think that a larger size of the population of producers would cause a democratization,” in fact “it does not.” “If anything,” Taleb notes, “it causes even more clustering.” The “free speech absolutist” position predicts that the production of more speech results in a closer approximation of the Truth; empirical results, however, suggest that more production results merely in a smaller number of products becoming more successful, for reasons that may have nothing to do with their intrinsic merits.

These results suggest that perhaps Stanley Fish has it right about “free speech,” and thus that the Lords of Augusta—like their spiritual brethren who shot up the offices of Charlie Hebdo in early January this year—have it completely right in the tight rein they hold over the announcers who work their golf tournament: Truth could be the result of, not the enemy of, regulation. The irony, of course, is that this also suggests the necessity of regulation in areas aside from commentary about golf and golfers—a result that, one suspects, is not only unwelcome to the Lords of the Masters, but one that puts them in uncomfortable company. Allahu akbar, presumably, sounds peculiar with a Southern accent.