No Hurry

The man who is not in a hurry will always see his way clearly; haste blunders on blindly.
—Titus Livius (Livy). Ab Urbe Condita. (From the Foundation of the City.) Book 22.

Just inland from the Adriatic coast, northwest of Bari, lies the little village of Canne. In Italian, the name means “reeds”; a nondescript name for a nondescript town. But the name has outlived at least one language, and will likely outlive another, all due to one August day more than 2000 years ago, when two ways of thinking collided; the conversation marked by that day has continued until now, and will likely outlive us all. One line of that conversation was taken up recently by a magazine likely as obscure as the village to most readers: Parameters, the quarterly publication of the U.S. Army War College. The article that continues the conversation whose earliest landmark may be found near the little river Ofanto is entitled “Intellectual Capital: A Case for Cultural Change,” and the argument of the piece’s three co-authors—all professors at West Point—is that “recent US Army promotion and command boards may actually penalize officers for their conceptual ability.” It’s a charge that, if true, ought first to scare the hell out of Americans (and everyone else on the planet), because it means that the single most fearsome power on earth is more or less deliberately being handed over to morons. But it ought, second, to scare the hell out of people because it suggests that the lesson first taught at the sleepy Italian town has still not been learned—a lesson suggested by two words I withheld from the professors’ charge sheet.

Those words? “Statistical evidence”: as in, “statistical evidence shows that recent US Army promotion and command boards …” What the statistical evidence marshaled by the West Pointers shows, it seems, is that

officers with one-standard-deviation higher cognitive abilities had 29 percent, 18 percent, and 32 percent lower odds, respectively, of being selected early … to major, early to lieutenant colonel, and for battalion command than their one-standard-deviation lower cognitive ability peers.

(A “standard deviation,” for those who don’t know—and the fact that you don’t is part of the story being told here—is a measure of how far from the mean, or average, a given set of data tends to spread: a low standard deviation means that the data tends to cluster pretty tightly, like a river in mountainous terrain, whereas a high one means that the data spreads widely, like a river’s delta.) The study controlled for gender, ethnicity, year group, athleticism, months deployed, military branch, geographic region, and cumulative scores as cadets—and found that “if two candidates for early promotion or command have the same motivation, ethnicity, gender, length of Army experience, time deployed, physical ability, and branch, and both cannot be selected, the board is more likely to select the officer with the lower conceptual ability.” In other words, in the Army, the smarter you are, the less likely you are to advance quickly—which, obviously, may affect just how far you are likely to go at all.
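For the curious, the idea can be made concrete in a few lines of Python; the numbers below are invented purely for illustration and have nothing to do with the Army data itself.

```python
import statistics

# Two invented datasets with the same mean (50) but very different spread:
mountain_river = [49, 50, 50, 51, 50]  # tightly clustered, like a river hemmed in by mountains
river_delta = [20, 35, 50, 65, 80]     # widely spread, like a river's delta

print(statistics.mean(mountain_river))   # 50
print(statistics.mean(river_delta))      # 50
print(statistics.stdev(mountain_river))  # about 0.7: a low standard deviation
print(statistics.stdev(river_delta))     # about 23.7: a high one
```

So “one-standard-deviation higher cognitive ability,” in the study’s phrasing, simply means one such step of spread above the average officer.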

That may be so, you might say, but maybe it’s just that smarter people aren’t very “devoted,” or “loyal” (or whatever sort of adjective one prefers), at least according to the military. This dichotomy even has a name in such circles: “Athens” vs. “Sparta.” According to the article, “Athens represents an institutional preference for intellectual ability, critical thinking, education, etc.,” while conversely “Sparta represents an institutional preference for motivation, tactical-ability, action-bias, diligence, intensity, physicality, etc.” So maybe the military is not promoting as many “Athenians” as “Spartans”—but perhaps the military is simply a more “Spartan” organization than others. Maybe this study is just a bunch of Athenians whining about not being able to control every aspect of life.

Yet, on reflection, that’s a pretty weird way to conceptualize things: why should “Athens” be opposed to “Sparta” at all? In other words, why should it happen that the traits these names attempt to describe are distributed in zero-sum packages? Why should it be that people with “Spartan” traits should not also possess “Athenian” traits, and vice versa? The whole world supposedly divides along just these lines—but I think any of us knows someone who is both of these, or neither, and if so then it seems absurd to think that possessing a “Spartan” trait implies a lack of a corresponding “Athenian” one. As the three career Army officers say, “motivation levels and cognitive ability levels are independent of each other.” Just because someone is intelligent does not mean they are likely to be unmotivated; indeed, it makes more sense to think just the opposite.

Yet, apparently, the upper levels of the U.S. military think differently: they seem to believe that devotion to duty precludes intelligence, and vice versa. We know this not because of stereotypes about military officials, but instead because of real data about how the military allocates its promotions. In their study, the three career Army officers report that they

found significant evidence that regardless of what motivation/diligence category officers were in (low, medium, or high) there was a lower likelihood the Army would select the officers for early promotion or battalion command the higher their cognitive ability, despite the fact that the promotion and selection boards had no direct information indicating each officer’s cognitive ability. (Emphasis added.)

This latter point is so significant that I highlight it: it demonstrates that the Army is—somehow—selecting against intelligence even when it, supposedly, doesn’t know whether a particular candidate has it or not. Nonetheless, the boards are apparently able to suss it out (which itself is a pretty interesting use of intelligence) in order to squash it, and not only that, squash it no matter how devoted a given officer might be. In sum, these boards are not selecting against intelligence because they are selecting for devotion, or whatever, but instead are just actively attempting to promote less-intelligent officers.

Now, it may be replied that this may be so—but perhaps fighting wars is not similar to doing other types of jobs. Or as the study puts it: perhaps “officers with higher intellectual abilities may actually make worse junior officers than their average peers.” If so, as the three career Army officers point out, such a situation “would be diametrically opposed to the … academic literature” on leadership, which finds a direct relationship between cognitive ability and success. Even so, however, perhaps war is different: the “commander of a top-tier special operations selection team,” the three officers say, reported that his team rejected candidates who scored too high on a cognitive ability test, on the grounds that such candidates “‘take too long to make a decision’”—despite the fact that, as the three officers point out, “research has shown that brighter people come up with alternatives faster than their average-conceptual-level peers.” Thinking that intelligence inhibits action, in other words, would make war essentially different from virtually every other human activity.

Of course, had that commander been in charge of recruitment during the U.S. Civil War, that would have meant not employing an alcoholic, cashiered (fired) former lieutenant later denounced as “an unimaginative butcher in war and a corrupt, blundering drunkard in peace,” a man who failed in all the civilian jobs he undertook, as a farmer and even a simple store clerk, and came close to bankruptcy several times over the course of his life. That man was Ulysses S. Grant—the man about whom Abraham Lincoln would say, when his critics pointed to his poor record, “I cannot spare this man; he fights!” (In other words, he did not hesitate to act.) Grant would, as is well known, eventually accept the surrender of his adversary, Robert E. Lee, at Appomattox Court House; hence, a policy that runs the risk of not finding Grant in time appears, at best, pretty cavalier.

Or, as the three career Army officers write, “if an organization assumes an officer cannot be both an Athenian and a Spartan, and prefers Spartans, any sign of Athenians will be discouraged,” and so therefore “when the Army needs senior officers who are Athenians, there will be only Spartans remaining.” The opposite view somehow thinks that smart people will still be around when they are needed—but when they are needed, they are really needed. Essentially, this view is more or less to say that the Army should not worry about its ammunition supply, because if something ever happened to require a lot of ammunition the Army could just go get more. Never mind the fact that, at such a moment, everyone else is probably going to want some ammunition too. It’s a pretty odd method of thinking that treats physical objects as more important than the people who use them—after all, as we know, guns don’t kill people, people do.

Still, the really significant thing about Grant is not he himself, but rather that he represented a particular method of thinking: “I propose to fight it out on this line, if it takes all summer,” Grant wrote to Abraham Lincoln in May 1864; “Hold on with a bulldog grip, and chew and choke as much as possible,” Lincoln replied to Grant a few months later. Although Grant is, as above, sometimes called a “butcher” who won the Civil War simply by firing more bodies at the Confederacy than the Southerners could shoot, he clearly wasn’t the idiot certain historians have made him out to be: the “‘one striking feature about Grant’s [written] orders,’” as another general would observe later, was that no “‘matter how hurriedly he may write them in the field, no one ever had the slightest doubt as to their meaning, or even has to read them over a second time to understand them.’” Rather than being unintelligent, Grant had a particular way of thinking: as one historian has observed, “Grant regard[ed] his plans as tests,” so that Grant would “have already considered other options if something doesn’t work out.” Grant had a certain philosophy, a method of both thinking and doing things—which he more or less thought of as the same thing. But Grant did not invent that method of thinking. It was already old when a certain Roman Senator conceived of a single sentence that, more or less, captured Grant’s philosophy—a sentence that, in turn, referred to a certain village near the Adriatic coast.

The road to that village is, however, a long one; even now we are just more than halfway there. The next step taken upon it was by a man named Quintus Fabius Maximus Verrucosus—another late bloomer, much like Grant. According to Plutarch, whose Parallel Lives sought to compare the biographies of famous Greeks and Romans, as a child Fabius was known for his “slowness in speaking, his long labour and pains in learning, his deliberation in entering into the sports of other children, [and] his easy submission to everybody, as if he had no will of his own,” traits that led many to “esteem him insensible and stupid.” Yet, as he was educated he learned to make his public speeches—required of young aristocratic Romans—without “much of popular ornament, nor empty artifice,” and instead with a “great weight of sense.” And also like Grant, who in the last year of the war faced a brilliant opposing general in Robert E. Lee, Fabius would eventually face an ingenious military leader who desired nothing more than to meet his adversary in battle—where that astute mind could destroy the Roman army in a single day, and so, possibly, win the freedom of his nation.

That adversary was Hannibal Barca, the man who had marched his army, including his African war elephants, across the Alps into Italy. Hannibal was a Carthaginian; Carthage, a Phoenician city on the North African coast, had already fought one massive war with Rome (the First Punic War) and had now, through Hannibal’s invasion, embarked on a second. Carthage was about as rich and powerful as Rome itself, so by invading, Hannibal posed a mortal threat to the Italians—not least because Hannibal had quite a reputation as a general already. Hence Fabius, who by this time had himself been selected to oppose the invader, “deemed it best not to meet in the field a general whose army had been tried in many encounters, and whose object was a battle,” and instead attempted to “let the force and vigour of Hannibal waste away and expire, like a flame, for want of fuel,” as Plutarch put the point. Instead of attempting to meet Hannibal in a single battle, where the African might out-general him, Fabius attempted to wear him—an invader far from his home base—down.

For some time things continued like this: Hannibal ranged about Italy, attempting to provoke Fabius into battle, while the Roman followed meekly at a distance; according to his enemies, as if he were Hannibal’s servant. Meanwhile, according to Plutarch, Hannibal himself sought to encourage that idea: burning the countryside around Rome, the Carthaginian made sure to post armed guards around Fabius’ estates in order to suggest that the Roman was in his pay. Eventually, these stratagems had their effect, and after a further series of misadventures, Fabius retired from command—just the event Hannibal awaited.

The man who became commander after Fabius was Varro, and it was he who led the Romans to the small village near the Adriatic coast. What happened near that village more than 2000 years ago may be summed up by an image familiar to viewers of the television show Game of Thrones:


On the television show, the chaotic mass in the middle is the tiny army of the character Jon Snow, whereas the orderly lines about the perimeter are the much vaster army of Ramsay Bolton. But in historical reality, the force in the center being surrounded by the opposing force was actually the larger of the two—the Roman army. It was the smaller of the two armies, the Carthaginian one, that stood at the periphery. Yet, somehow, the outcome was more or less the same: the mass of soldiers on the outside of that circle destroyed the force of soldiers on the inside, despite there being more of the latter; a fact so surprising that not only is it still remembered, but it was also the subject of not one but two remarks that are still remembered today.

The first of these is a remark made just before the battle itself—a remark that came in reply to the comment of one of Hannibal’s lieutenants, an officer named Gisgo, on the disparity in size between the two armies. The intent of Gisgo’s remark was, it would seem, something to the effect of, “you’re sure this is going to work, right?” To which Hannibal replied: “another thing that has escaped your notice, Gisgo, is even more amazing—that although there are so many of them, there is not one among them called Gisgo.” That is to say, Gisgo is a unique individual, and so the numbers do not matter … etc., etc. We can all fill in the arguments from there: the power of the individual, the singular force of human creativity, and so on. In the case of the incident outside Cannae, those platitudes happened to be true—Hannibal really was a kind of tactical genius. But he also happened not to be facing Fabius that day.

Fabius himself was not the sort of person who could sum up his thought in a pithy (and trite) remark, but I think that the germ of his idea was distilled some centuries after the battle by another Roman senator. “Did all the Romans who fell at Cannae”—the ancient name for the village now known as Canne—“have the same horoscope?” asked Marcus Cicero, in a book entitled De Divinatione. The comment is meant as a deflationary pinprick, designed to explode the pretensions of the followers of Hannibal—a point revealed by a subsequent sentence: “Was there ever a day when countless numbers were not born?” The comment’s point, in other words, is much the same Cicero made in another of his works, when he tells a story about the atheistic philosopher Diagoras. Reproaching his atheism, a worshipper directed Diagoras to the many painted tablets in praise of the gods at the local temple—tablets produced by storm survivors who had taken a vow to have such a tablet painted while enveloped by the sea’s power. Diagoras replied, according to Cicero, that this is merely so “because there are no pictures anywhere of those who have been shipwrecked.” In other words: check your premises, sportsfans: what you think may be the result of “creativity,” or some other malarkey, may simply be due to the actions of chance—in the case of Hannibal, the fact that he happened not to be fighting Fabius.

Or, more specifically, to a statistical concept called the Law of Large Numbers. First explicitly described by the mathematician Jacob Bernoulli in 1713, this is the law that holds—in Bernoulli’s words—that “it is not enough to take one or another observation for […] reasoning about an event, but that a large number of them are needed.” In a crude way, this law is what critics of Grant refer to when they accuse him of being a “butcher”: that he simply applied the larger numbers of men and matériel available to the Union side to the war effort. It’s also what the enemies of the man who ought to have been on the field at Cannae—but wasn’t—said about him: that Fabius fought what military strategists call a “war of attrition” rather than a “war of maneuver.” At the time, and since, many have turned their noses up at such methods: in ancient times, they were thought ignoble, unworthy—which was why Varro insisted on rejecting what he might have called an “old man strategy” and went on the attack that August day. Yet they were precisely the means by which, two millennia apart, two very similar men saved their countries from very similar threats.
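Bernoulli’s law is easy to watch in action. The sketch below (a toy Python simulation of fair-coin flips, invented purely for illustration) shows how a handful of observations can wander far from the truth while a large number of them converge on it:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fraction_of_heads(n):
    """Flip a fair coin n times; return the observed fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# "It is not enough to take one or another observation ...
#  a large number of them are needed."
for n in (10, 100, 10_000, 100_000):
    print(n, fraction_of_heads(n))  # drifts ever closer to the true 0.5
```

Ten flips can easily come up seven heads; a hundred thousand flips essentially never stray far from one half. Attrition is the Law of Large Numbers waged as strategy.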

Today, of course, very many people on the American “Left” say that what they call “scientific” and “mathematical” thought is the enemy. On the steps of the University of California’s Sproul Hall, more than fifty years ago, the Free Speech Movement’s Mario Savio denounced “the operation of the machine”; some years prior to that, the German Marxist Theodor Adorno and his collaborator Max Horkheimer had condemned the spread of such thought as, more or less, the precondition necessary for the Holocaust: “To the Enlightenment,” the two sociologists wrote, “that which does not reduce to numbers, and ultimately to the one, becomes illusion.” According to Bruce Robbins of Columbia University, “the critique of Enlightenment rationality is what English departments were founded on,” while it’s also been observed that, since the 1960s, “language, symbolism, text, and meaning came to be seen as the theoretical foundation for the humanities.” But as I have attempted to show, the notions conceived of by these writers as belonging to a particular part of the Eurasian landmass at a particular moment of history may not be so particular after all.

Leaving those large-scale considerations aside, however, returns us to the discussion concerning promotions in the U.S. military—where the assertions of the three career officers apparently cannot be allowed to go unchallenged. A reply to the three career officers’ article from a Parameters editorial board member, predictably enough, takes them to task for not recognizing that “there are multiple kinds of intelligence,” and instead suggesting that there is “only one particular type of intelligence”—you know, just the same smear used by Adorno and Horkheimer. The author of that article, Anna Simons (a professor at the U.S. Naval Postgraduate School), further intimates that the three officers do not possess “a healthy respect for variation”—i.e., “diversity.” Which, finally, brings us to the point of all this: what is really happening within the military is that, in order to promote what is called “diversity,” standards have to be amended in such a fashion as not only to include women and minorities, but also dumb people.

In other words, the social cost of what is known as “inclusiveness” is simultaneously a general “dumbing-down” of the military: promoting women and minorities also means rewarding not-intelligent people—and, because statistically speaking there simply are more dumb people than not, that also means suppressing smart people who are like Grant, or Fabius. It never appears to occur to anyone that, more or less, talking about “variation” and the like is what the enemies of Grant—or, further back, the enemies of Fabius—said also. But, one supposes, that’s just how it goes in the United States today: neither Grant nor Fabius was called to service until their countrymen had been scared pretty badly. It may be, in other words, that the American military will continue to suppress people with high cognitive abilities within its ranks—apparently, 9/11 and its consequences were not enough like the battle fought near the tiny Italian village to change American views on these matters. Statistically speaking, after all, 9/11 only killed 0.001% of the U.S. population, whereas Cannae killed perhaps a third of the members of the Roman Senate. That, in turn, raises the central question: If 9/11 was not enough to convince Americans that something isn’t right, well—

What will?


Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.


“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George R.R. Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not necessarily so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize-laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
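Fama’s phrase “no memory” can be sketched directly in a toy simulation (invented data, not Fama’s actual study): treat each day’s price change as a random draw, then check that yesterday’s change tells you essentially nothing about today’s.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# A price path, per Fama, behaves like "a series of cumulated random
# numbers"; here are the daily changes themselves:
changes = [random.gauss(0, 1) for _ in range(50_000)]

def correlation(xs, ys):
    """Sample correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# "The series of price changes has no memory": the correlation between
# each day's change and the next day's is essentially zero.
print(correlation(changes[:-1], changes[1:]))
```

The printed correlation hovers near zero, which is exactly what “the past cannot be used to predict the future in any meaningful way” claims about real price series.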

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether it was true that the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper, “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
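The multitude-of-funds point is easy to demonstrate with a toy simulation (invented numbers, not Fama and French’s data): give thousands of zero-skill “managers” a coin-flip chance of beating the market each year, and a handful will still post perfect ten-year records.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Every "fund manager" here is a pure coin-flipper: a 50/50 chance of
# beating the market each year, with no skill whatsoever.
n_funds, n_years = 5_000, 10

perfect_records = sum(
    all(random.random() < 0.5 for _ in range(n_years))
    for _ in range(n_funds)
)

# By chance alone we expect about 5_000 / 2**10, i.e. roughly five
# funds, to beat the market ten years running.
print(perfect_records)
```

Those few coin-flippers would have gaudy track records to advertise, and not one of them knows anything at all about stocks.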

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools, and that of their seemingly liberal opponents in departments of the humanities, begins to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here is something like that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County v. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of suppressing votes. What’s far less often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether or not to stay in the European Union. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear her lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it is easy enough to dismiss the notion of elections. “Importantly,” Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently wrote, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter whether anyone votes at all—which is to say that to a great many smart, supposedly “liberal” or “leftist” people, the notion that voting has any “intrinsic value” has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Instruments of Darkness


And oftentimes, to win us to our harm,
The instruments of darkness tell us truths …
—William Shakespeare. The Tragedy of Macbeth. Act I, Scene 3.


This year’s Masters demonstrated, once again, the truism that nobody watches golf without Tiger Woods: last year’s Masters, played without Tiger, had the lowest ratings since 1957, while the ratings for this year’s Saturday round (featuring a charging Woods) were up by nearly half. So much is unsurprising; what was surprising, perhaps, was the reappearance of a journalistic fixture from the days of Tiger’s past: the “pre-Masters Tiger hype story.” It’s a recurrence that suggests Tiger may be taking cues from another ratings monster: the television series Game of Thrones. But if so—with a nod to Ramsay Snow’s famous line in the show—it suggests that Tiger himself doesn’t think his tale will have a happy ending.

The prototype of the “pre-Masters” story was produced in 1997, the year of Tiger’s first Masters win: before that “win for the ages,” it was widely reported that the young phenom had shot a 59 during a practice round at Isleworth Country Club. At the time the story seemed innocuous, but in retrospect there are reasons to interrogate it more deeply—not to say it didn’t happen, exactly, but to ask whether it was released as part of a larger design. After all, Tiger’s father Earl—still alive then—would have known just what to do with such a story.

Earl, as all golf fans know, created and disseminated the myth of the invincible Tiger to anyone who would listen in the late 1990s: “Tiger will do more than any other man in history to change the course of humanity,” Gary Smith quoted him saying in the Sports Illustrated story (“The Chosen One”) that, more than any other, sold the Gospel of Woods. There is plenty of reason to suspect that the senior Woods deliberately created this myth as part of a larger campaign: because Earl, as a former member of the U.S. Army’s Green Berets, knew the importance of psychological warfare.

“As a Green Beret,” writes John Lamothe in an academic essay on the two Woodses, father and son, Earl “would have known the effect … psychological warfare could have on both the soldier and the enemy.” As Tiger himself said in a 1996 interview for Orange Coast magazine—before the golfer put up a barrier between himself and the press—“Green Berets know a lot about psychological torture and things like that.” Earl, for his part, remarked that while raising Tiger he “pulled every dirty, nasty trick I could remember from psychological warfare I learned as a Green Beret.” Both men described this training as a matter of rattling keys or ripping Velcro at inopportune moments—but it is difficult not to wonder whether it went deeper.

At the moment of their origin in 1952, after all, the Green Berets, or Special Forces, were a subsection of the Psychological Warfare Staff at the Pentagon: psychological warfare, in other words, was part of their founding mission. And as Lamothe observes, part of the goal of psychological warfare is to create “confidence” in your allies “and doubt in the competitors.” As early as 2000, the sports columnist Thomas Boswell was describing how Tiger “tries to imprint on the mind of every opponent that resistance is useless,” a tactic that Boswell claimed the “military calls … ‘overwhelming force’”—and a tactic that is far older than the game of golf. Consider, for instance, a story from golf’s homeland of Scotland: the tale of the “Douglas Larder.”

It happened at a time of year not unfamiliar to viewers of the Masters: Palm Sunday, in April of 1308. The story goes that Sir James Douglas—an ally of Robert the Bruce, then in rebellion against the English crown—returned that day to his family’s home, Douglas Castle, which had been seized by the English. Taking advantage of the holiday, Douglas and his men—essentially, a band of guerrillas—slaughtered the English garrison in the church where they worshipped, then beheaded them, ate the feast the Englishmen had no more use for, and subsequently poisoned the castle’s wells and destroyed its supplies (the “Larder” part of the story’s title). Lastly, Douglas set the English soldiers’ bodies afire.

To viewers of the television series Game of Thrones, or readers of the series of books it is based upon (A Song of Ice and Fire), the story might sound vaguely familiar: the “Douglas Larder” is, as popular historian William Rosen has pointed out, one source of the event known from the television series as the “Red Wedding.” Although the television event also borrows from the Scottish “Black Dinner” (perhaps closer in setting) and the later Massacre at Glencoe, the “Red Wedding” still reproduces the most salient details of the “Douglas Larder.” In both, the attackers take advantage of their prey’s reliance on piety; in both, the bodies of the dead are mutilated in order to heighten the monstrous effect.

To a modern reader, such a story may read simply as a record of barbarism—a reading that forgets that medieval people, though far less educated, were every bit as intelligent as anyone alive today. Douglas’ actions were not meant for horror’s sake, but to send a message: the raid on the castle “was meant to leave a lasting impression … not least upon the men who came to replace their dead colleagues.” Acts like the attack on his own castle demonstrate how the “Black Douglas”—“mair fell than wes ony devill in hell,” according to a contemporary account—was “an early practitioner of psychological warfare”: he knew how “fear alone could do much of the work of a successful commander.” It seems hardly credible that Earl Woods—a man who had seen combat in the guerrilla war of Vietnam—did not know the same lesson. Nor is it credible that Earl didn’t tell Tiger about it.

Certainly, Tiger himself has been a kind of Douglas: he won his first Masters by 12 shots, and in his annus mirabilis of 2000 he won the U.S. Open at Pebble Beach by 15. Displays like that, many have thought, functioned similarly to Douglas’ attacks, if less macabrely. The effect has even been documented academically: in 2008’s “Dominance, Intimidation, and ‘Choking’ on the PGA Tour,” professors Robert Connolly and Richard Rendleman found that being paired with Tiger cost other tour pros nearly half a shot per round from 1998 to 2001. The “intimidation factor,” that is, has been quantified—so it seems jejune at best to think that somebody connected to Tiger, even if he had not been aware of the effect in the past, would not have called his attention to the research.

Releasing a story prior to the Masters, then, can easily be seen as part of an attempt to revive Tiger’s heyday. But what’s interesting about this particular story is its difference from the 1997 version: then, Tiger just threw out a raw score; now, the number comes dressed in a peculiarly complicated costume. As retailed by Golf Digest’s Tim Rosaforte on the Tuesday before the tournament, Tiger had “recently shot a worst-ball 66 at his home course, Medalist Golf Club.” In Golf Digest, Alex Meyers in turn explained that “a worst-ball 66 … is not to be confused with a best-ball 66 or even a normal 66 for that matter,” because what “worst-ball” means is that “Woods played two balls on each hole, but only played the worst shot each time.” Why not just say, as in 1997, that Tiger shot some ridiculously low number?
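Meyers’ distinction can be made concrete with a toy model. This is only a sketch: the rule that a hole takes four successful “advances,” and the per-swing success probability, are invented for illustration, not a model of Woods’ actual round. The structural point is that under worst-ball rules a stroke only counts if both swings come off, so a worst-ball score can be expected only to be worse than an ordinary one.

```python
import random

def holes_strokes(p_good, worst_ball, n_holes=18, seed=7):
    """Toy scoring model: a hole is finished after 4 successful
    'advances,' and each swing succeeds with probability p_good.
    Under worst-ball rules the player swings twice per stroke and
    keeps the worse result, so a stroke advances only if BOTH
    swings succeed."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_holes):
        advances = 0
        while advances < 4:
            total += 1
            ok = rng.random() < p_good
            if worst_ball:
                ok = ok and (rng.random() < p_good)
            if ok:
                advances += 1
    return total

print(holes_strokes(0.9, worst_ball=False))
print(holes_strokes(0.9, worst_ball=True))
```

With a per-swing success rate of 0.9, the worst-ball player effectively plays at 0.81 per stroke, which is the arithmetic behind why a worst-ball 66 is so much more impressive than a plain one.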

The answer, I think, can be understood by way of the “Red Wedding”: just as George R. R. Martin, in order to write the A Song of Ice and Fire books, has revisited and revised many episodes of medieval history, so too is Tiger attempting to revisit his own past—a conclusion that would be glib were it not for the very make-up of this year’s version of the pre-Masters story. After all, to play a “worst-ball” round is to time-travel: it is, in effect, to revise—or rewrite—the past. Not only that, but—and in this it is very much like both Scottish history and Game of Thrones—it is also to guarantee a “downer ending.” Maybe Tiger, then, is suggesting to his fans that they ought to pay more attention.