Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, in the end, things converge toward reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that students in the humanities have been trained, over the past several generations, to boo and hiss at.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and its occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” For those who hold it, the belief straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
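
Fama’s phrase “cumulated random numbers” is easy to make concrete. The sketch below is a toy illustration of the idea rather than Fama’s actual test (the Gaussian changes, the seed, and the length of the series are all invented for the example): it builds a “price” by summing random changes, then checks whether one day’s change predicts the next.

```python
import random

# Toy illustration, not Fama's analysis: a "price" built by cumulating random
# changes, plus a check on whether yesterday's change tells us anything about
# today's. All numbers here are made up for the sketch.
random.seed(42)

changes = [random.gauss(0, 1) for _ in range(10_000)]  # hypothetical daily price changes
prices = [100.0]
for c in changes:
    prices.append(prices[-1] + c)  # the "price level" is just the cumulated random numbers

def correlation(xs, ys):
    # Plain Pearson correlation, computed by hand to keep the sketch dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Correlation between one day's change and the next day's change.
print(correlation(changes[:-1], changes[1:]))
```

The printed correlation should come out close to zero, which is all that “the series of price changes has no memory” asks of the data.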

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
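
The arithmetic behind that point is easy to check with a quick simulation. The sketch below is my own back-of-the-envelope illustration (the 5,000 funds, the ten-year window, and the coin-flip odds are invented for the example, not taken from Fama, French, or Tversky): give every “fund” zero skill and see how many compile star-looking records anyway.

```python
import random

# Back-of-the-envelope illustration of "given the multitude of funds, many
# have extreme returns by chance." The fund count, window, and coin-flip odds
# are hypothetical numbers, not Fama and French's data.
random.seed(0)

N_FUNDS, N_YEARS = 5_000, 10

def random_fund_wins():
    # Each year the fund beats the market with probability 0.5: pure luck, zero skill.
    return sum(random.random() < 0.5 for _ in range(N_YEARS))

wins_per_fund = [random_fund_wins() for _ in range(N_FUNDS)]

# Count the no-skill funds that nonetheless beat the market in at least eight
# of the ten years, the kind of track record that looks like genius.
stars = sum(1 for wins in wins_per_fund if wins >= 8)
print(f"{stars} of {N_FUNDS} purely random funds look like stars")
```

With those invented numbers, simple binomial arithmetic says roughly 270 of the 5,000 skill-free funds should beat the market in at least eight of ten years, purely by chance.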

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools and that of their seemingly liberal opponents in departments of the humanities begins to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here amounts to the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County v. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of suppressing votes. What’s far less often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of the United Kingdom decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently wrote, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter whether anyone votes or not—which is to say that for a great many smart, and supposedly “liberal” or “leftist,” people, the very notion that voting has any “intrinsic value” at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Instruments of Darkness

 

And oftentimes, to win us to our harm,
The instruments of darkness tell us truths …
—William Shakespeare
    The Tragedy of Macbeth
Act I, scene 3, lines 132–133 (1606)

 

This year’s Masters demonstrated, once again, the truism that nobody watches golf without Tiger Woods: last year’s Masters, played without Tiger, had the lowest ratings since 1957, while the ratings for this year’s Saturday round (featuring a charging Woods) were up by nearly half. So much is unsurprising; what was surprising, perhaps, was the reappearance of a journalistic fixture from the days of Tiger’s past: the “pre-Masters Tiger hype story.” It’s a recurrence that suggests Tiger may be taking cues from another ratings monster: the television series Game of Thrones. But if so—with a nod to Ramsay Snow’s famous line in the show—it suggests that Tiger himself doesn’t think his tale will have a happy ending.

The prototype of the “pre-Masters” story was produced in 1997, the year of Tiger’s first Masters win: before that “win for the ages,” it was widely reported that the young phenom had shot a 59 during a practice round at Isleworth Country Club. At the time the story seemed innocuous, but in retrospect there are reasons to interrogate it more deeply—not to say it didn’t happen, exactly, but to question whether it was released as part of a larger design. After all, Tiger’s father Earl—still alive then—would have known just what to do with the story.

Earl, as all golf fans know, created the myth of the invincible Tiger in the late 1990s and disseminated it to anyone who would listen: “Tiger will do more than any other man in history to change the course of humanity,” Gary Smith quoted him as saying in the Sports Illustrated story (“The Chosen One”) that, more than any other, sold the Gospel of Woods. There is plenty of reason to suspect that the senior Woods deliberately created this myth as part of a larger campaign: because Earl, as a former member of the U.S. Army’s Green Berets, knew the importance of psychological warfare.

“As a Green Beret,” writes John Lamothe in an academic essay on both the elder and the younger Woods, Earl “would have known the effect … psychological warfare could have on both the soldier and the enemy.” As Tiger himself said in a 1996 interview for Orange Coast magazine—before the golfer put up a barrier between himself and the press—“Green Berets know a lot about psychological torture and things like that.” Earl for his part remarked that, while raising Tiger, he “pulled every dirty, nasty trick I could remember from psychological warfare I learned as a Green Beret.” Both men described this training as a matter of rattling keys or ripping Velcro at inopportune moments—but it’s difficult not to wonder whether it went deeper.

At the moment of their origin in 1952, after all, the Green Berets, or Special Forces, were a subsection of the Psychological Warfare Staff at the Pentagon: psychological warfare, in other words, was part of their founding mission. And as Lamothe observes, part of the goal of psychological warfare is to create “confidence” in your allies “and doubt in the competitors.” As early as 2000, the sports columnist Thomas Boswell was describing how Tiger “tries to imprint on the mind of every opponent that resistance is useless,” a tactic that Boswell claimed the “military calls … ‘overwhelming force’”—and a tactic that is far older than the game of golf. Consider, for instance, a story from golf’s homeland of Scotland: the tale of the “Douglas Larder.”

It happened at a time of year not unfamiliar to viewers of the Masters: Palm Sunday, in April of 1308. The story goes that Sir James Douglas—an ally of Robert the Bruce, who was in rebellion against the English king Edward I—returned that day to his family’s home, Douglas Castle, which had been seized by the English. Taking advantage of the holiday, Douglas and his men—essentially, a band of guerrillas—slaughtered the English garrison inside the church where they were worshipping, then beheaded them, ate the Easter feast the Englishmen had no more use for, and subsequently poisoned the castle’s wells and destroyed its supplies (the “Larder” part of the story’s title). Lastly, Douglas set the English soldiers’ bodies afire.

To viewers of the television series Game of Thrones, or readers of the series of books it is based upon (A Song of Ice and Fire), the story might sound vaguely familiar: the “Douglas Larder” is, as popular historian William Rosen has pointed out, one source of the event known from the television series as the “Red Wedding.” Although the television event also borrows from the medieval Scottish “Black Dinner” (which is perhaps closer in terms of the setting), and the later incident known as the Massacre of Glencoe, still the “Red Wedding” reproduces the most salient details of the “Douglas Larder.” In both, the attackers take advantage of their prey’s reliance on piety; in both, the bodies of the dead are mutilated in order to increase the monstrous effect.

To a modern reader, such a story may read simply as a record of barbarism—but that reading forgets that medieval people, though far less educated, were just as intelligent as anyone alive today. Douglas’ actions were not meant for horror’s sake, but to send a message: the raid on the castle “was meant to leave a lasting impression … not least upon the men who came to replace their dead colleagues.” Acts like his attack on his own castle demonstrate how the “Black Douglas”—“mair fell than wes ony devill in hell” according to a contemporary account—was “an early practitioner of psychological warfare”: he knew how “fear alone could do much of the work of a successful commander.” It seems hardly credible to think Earl Woods—a man who’d been in combat in the guerrilla war of Vietnam—did not know the same lesson. Nor is it credible to think that Earl didn’t tell Tiger about it.

Certainly, Tiger himself has been a kind of Douglas: he won his first Masters by 12 shots, and in the annus mirabilis of 2000 he won the U.S. Open at Pebble Beach by 15. Displays like that, many have thought, functioned similarly to Douglas’ attacks, if less macabrely. The effect has even been documented academically: in 2008’s “Dominance, Intimidation, and ‘Choking’ on the PGA Tour,” professors Robert Connolly and Richard Rendleman found that being paired with Tiger cost other tour pros nearly half a shot per round from 1998 to 2001. The “intimidation factor,” that is, has been quantified—so it seems jejune at best to think that no one connected to Tiger would have called his attention to the research, even if Tiger himself had not already been aware of the effect.

Releasing a story prior to the Masters, then, can easily be seen as part of an attempt to revive Tiger’s heyday. But what’s interesting about this particular story is its difference from the 1997 version: then, Tiger just threw out a raw score; now, it’s being dressed in a peculiarly complicated costume. As retailed by Golf Digest’s Tim Rosaforte on the Tuesday before the tournament, Tiger had “recently shot a worst-ball 66 at his home course, Medalist Golf Club.” In Golf Digest, Alex Meyers in turn explained that “a worst-ball 66 … is not to be confused with a best-ball 66 or even a normal 66 for that matter,” because what “worst-ball” means is that “Woods played two balls on each hole, but only played the worst shot each time.” Why not just say, as in 1997, that Tiger shot some ridiculously low number?

The answer, I think, can be understood by way of the “Red Wedding”: just as George Martin, in order to write the A Song of Ice and Fire books, has revisited and revised many episodes of medieval history, so too is Tiger attempting to revisit his own past—a conclusion that would be glib were it not for the very make-up of this year’s version of the pre-Masters story itself. After all, to play a “worst-ball” is to time-travel: it is, in effect, to revise—or rewrite—the past. Not only that, but—and in this it is very much like both Scottish history and Game of Thrones—it is also to guarantee a “downer ending.” Maybe Tiger, then, is suggesting to his fans that they ought to pay more attention.