Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
—Joseph Heller. Catch-22. (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore through (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famous sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component of winning baseball games, and winning baseball games is a key component of winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. Yet on the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games, but occasionally giving up blow-out losses—and yet, as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. Finally, the 1964 Cardinals had two things going for them: first, Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him), and second, they played in the first year after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
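
One quick way to see what Forman means is Bill James’s “Pythagorean expectation,” the standard sabermetric rule of thumb for converting runs scored and allowed into an expected record. The formula is James’s; applying it to the Royals’ totals quoted earlier is my own back-of-the-envelope illustration, not anything from Forman’s piece.

```python
# Bill James's Pythagorean expectation: expected W% ~ RS^2 / (RS^2 + RA^2).
# Fed the 1984 Royals' totals quoted above (673 scored, 684 allowed), it
# says a team with that run differential "should" have had a losing record.
def pythagorean(runs_scored, runs_allowed, games=162):
    pct = runs_scored**2 / (runs_scored**2 + runs_allowed**2)
    return pct, pct * games

pct, wins = pythagorean(673, 684)
print(f"1984 Royals expected winning percentage: {pct:.3f} (about {wins:.0f} wins)")
```

A sub-.500 expected record for a division winner is precisely the kind of “lumpiness” that run differential smooths out.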

That’s also, as it happens, just what the Law of Large Numbers does: first noted by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has done—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations the more likely it is that all those bets will converge toward the expected result. In other words, a single coin toss might come up heads or might come up tails—but the more times the coin is thrown, the more closely the proportion of heads will approach one half.
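
A minimal simulation makes Bernoulli’s point concrete: as the tosses pile up, the proportion of heads settles toward one half even though the raw surplus of heads over tails keeps wandering. (The code is only an illustration of the law, not anything from Ars Conjectandi.)

```python
import random

# Toss a fair coin a million times and watch the *proportion* of heads
# converge, while the absolute gap between heads and tails does not.
rng = random.Random(42)
heads = 0
for n in range(1, 1_000_001):
    heads += rng.random() < 0.5
    if n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        print(f"{n:>9} tosses: share of heads = {heads / n:.4f}, "
              f"heads minus tails = {2 * heads - n:+d}")
```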

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals now stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather through winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” However bizarre that sort of result is, according to some it’s just what makes the Electoral College worth keeping.

Hively was covering that story in 1996: his Discover piece reported how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff tried to argue that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the Constitutional Convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball, and hence it ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is that it should eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (Cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/.). As Mlodinow has also observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that the winner really was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. It may even be argued that baseball, by increasingly relying on a playoff system instead of the regular-season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969 the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams with a run differential ratio of less than 1.100 to 1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
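
To get a feel for Mlodinow’s point, here is a minimal sketch (my own arithmetic, not Mlodinow’s calculation) that treats a best-of-n series as n independent games and asks how often the better team actually wins it. The per-game win probabilities of 55 percent and two-in-three are illustrative assumptions, not figures taken from the text.

```python
from math import comb

def p_series_win(p, n_games):
    """Probability that a team winning each game with probability p takes a
    best-of-n series (equivalent to winning a majority if all n games were
    played out)."""
    need = n_games // 2 + 1
    return sum(comb(n_games, k) * p**k * (1 - p)**(n_games - k)
               for k in range(need, n_games + 1))

for p in (0.55, 0.667):            # "slightly better" vs. "much better" team
    for n in (7, 23, 269):         # the real Series vs. Mlodinow's lengths
        print(f"p = {p:.3f}, best of {n:>3}: "
              f"better team wins {p_series_win(p, n):.1%} of the time")
```

Even granting the better team a 55 percent edge in every game, a best-of-seven series picks the wrong team roughly four times out of ten; in that sense, a short series is mostly a coin flip.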

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution (the human thumb) to a problem and the actual solution (the panda’s thumb) implies the presence of “history,” that would then intimate that the Electoral College is either the result of a lack of understanding of the mathematics of chance with regard to elections—or that the American system for electing presidents was not designed for the purpose it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these, some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reaffirmed one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, the marquis had, before even meeting Jefferson when the Virginian came to France to be the American minister, already been for years a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Although not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, at least some were—and arguably, they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at the least, then, and perhaps Madison next most likely, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least-chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
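
What Condorcet proved can be written down in a few lines. The sketch below computes the probability that a simple majority of independent voters picks the “correct” outcome; the 55 percent individual competence is an arbitrary illustrative assumption, and the code is of course mine, not Condorcet’s.

```python
from math import comb

def p_majority_correct(p, n_voters):
    """Condorcet's jury theorem: probability that a simple majority of
    n independent voters is right, when each voter is right with
    probability p. (n_voters is assumed odd, so there are no ties.)"""
    need = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(need, n_voters + 1))

for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters, each right 55% of the time: "
          f"majority is right {p_majority_correct(0.55, n):.1%} of the time")
```

With each voter only slightly better than a coin flip, a single voter is right 55 percent of the time, but a thousand-voter majority is right more than 99 percent of the time: the electoral analogue of playing a 269-game series.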

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, Wilson’s proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically-influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23 games long, and possibly as long as 269—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.

Noble Lie

With a crew and good captain well seasoned,
They left fully loaded for Cleveland.
—“The Wreck of the Edmund Fitzgerald.” 1976.

The comedian Bill Maher began the “panel” part of his show Real Time the other day—the last episode before the election—by noting that virtually every political expert had dismissed Donald Trump’s candidacy at every stage of the past year’s campaign. When Trump announced he was running, Maher observed, the pundits said “oh, he’s just saying that … because he just wants to promote his brand.” They said Trump wouldn’t win any voters, Maher noted—“then he won votes.” And then, Maher went on, they said he wouldn’t win any primaries—“then he won primaries.” And so on, until Trump became the Republican nominee. So much we know, but what was of interest about the show was the response of one of Maher’s guests: David Frum, a Canadian who despite his immigrant origins became a speechwriter for George W. Bush, invented the phrase “axis of evil,” and has since joined the staff of the supposedly liberal magazine The Atlantic. The interest of Frum’s response was not only how marvelously inane it was—but also how it had already been decisively refuted only hours earlier, by men playing a boy’s game on the Lake Erie shore.

Maybe I’m being cruel, however: like most television shows, Real Time with Bill Maher is shot before it is aired, and this episode was released last Friday. Frum, then, may not have been aware, when he said what he said, that the Chicago Cubs had won the World Series on Wednesday—and if he is like most people, Frum is furthermore unaware of the significance of that event, which goes (as I will demonstrate) far beyond baseball. Still, surely Frum must have been aware of how ridiculous what he said was, given that the conversation began with Maher reciting the failures of the pundit class—and Frum admitted to belonging to that class. “I was one of those pundits that you made fun of,” Frum confessed to Maher—yet despite that admission, Frum went on to make a breathtakingly pro-pundit argument.

Trump’s candidacy, Frum said, demonstrated the importance of the gatekeepers of the public interest—the editors of the national newspapers, for instance, or the anchors of the network news shows, or the mandarins of the political parties. Retailing an argument similar to one made by, among others, Salon’s Bob Cesca—who contended in early October that “social media is the trough from which Trump feeds”—Frum proceeded to make the case that the Trump phenomenon was only possible once apps like Facebook and Twitter enabled presidential candidates to bypass the traditional centers of power. To Frum, in other words, the proper response to the complete failure of the establishment (to defeat Trump) was to prop up the establishment (so as to defeat future Trumps). To protect against the failure of experts, Frum earnestly argued—with no apparent sense of irony—that we ought to give more power to experts.

There is, I admit, a certain schadenfreude in witnessing a veteran of the Bush Administration tout the importance of experts, given that George W.’s regime was notable for, among other things, “systematically chang[ing] and suppress[ing] … scientific reports about global warming” (according to the British Broadcasting Corporation)—to say nothing of how Bush cadres torpedoed the advice of the professionals of the CIA vis-à-vis the weapons-buying habits of a certain Middle Eastern tyrant. The larger issue, however, is that the very importance of “expert” knowledge has been undergoing a deep interrogation for decades now—and that the victory of the Chicago Cubs in this year’s World Series has brought much of that critique to the mainstream.

What I mean can be demonstrated by a story told by the physicist Freeman Dyson—a man who never won a Nobel Prize, nor even received a doctorate, but nevertheless was awarded a place at Princeton’s Institute for Advanced Study at the ripe age of thirty by none other than Robert Oppenheimer (the man in charge of the Manhattan Project) himself. Although Dyson has had a lot to say during his long life—and a lot worth listening to—on a wide range of subjects, from interstellar travel to Chinese domestic politics, of interest to me in connection with Frum’s remarks on Donald Trump is an article Dyson published in The New York Review of Books in 2011, about a man who did win the Nobel Prize: the Israeli psychologist Daniel Kahneman, who won the prize for economics in 2002. In that article, Dyson told a story about himself: specifically, what he did during World War II—an experience, it turns out, that leads by a circuitous path over the course of seven decades to the epic clash resolved by the shores of Lake Erie in the wee hours of 3 November.

Entitled “How to Dispel Your Illusions,” the piece tells the story of Dyson’s time as a young statistician with the Royal Air Force’s Bomber Command in the spring of 1944—a force that suffered, according to the United Kingdom’s Bomber Command Museum, “a loss rate comparable only to the worst slaughter of the First World War trenches.” To combat this horror, Dyson was charged with discovering the common denominator between the bomber crews that survived until the end of their thirty-mission tour of duty (about 25% of all air crews). Since they were succeeding when three out of four of their comrades were failing, Dyson’s superiors assumed that those successful crews were doing something that their less-successful colleagues (who were mostly so much less successful that they were no longer among the living) were not.

Bomber Command, that is, had a theory about why some survived and some died: “As [an air crew] became more skillful and more closely bonded,” Dyson writes that everyone at Bomber Command thought, “their chances of survival would improve.” So Dyson, in order to discover what that something was, plunged into the data of all the bombing missions the United Kingdom had run over Germany since the beginning of the war. If he could find it, maybe it could be taught to the others—and the war brought that much closer to an end. But despite all his searching, Dyson never found that magic ingredient.

It wasn’t that Dyson didn’t look hard enough for it: according to Dyson, he “did a careful analysis of the correlation between the experience of the crews and their loss rates, subdividing the data into many small packages so as to eliminate effects of weather and geography.” Yet, no matter how many different ways he looked at the data, he could not find evidence that the air crews that survived were any different from the ones shot down over Berlin or lost in the North Sea: “There was no effect of experience,” Dyson’s work found, “on loss rate.” Who lived and who died while attempting to burn Dresden or blow up Hamburg was not a matter of experience: “whether a crew lived or died,” Dyson writes, “was purely a matter of chance.” The surviving crews possessed no magical ingredient. They couldn’t—perhaps because there wasn’t one.
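
What Dyson found is exactly what a pure-chance model predicts. The sketch below simulates crews that all face the same fixed loss rate on every mission; the roughly 4.5 percent per-mission figure is my own back-calculation from the 25 percent tour-survival rate quoted above, not Bomber Command data, and the simulation is only a null model, not Dyson’s actual analysis.

```python
import random

# Null model consistent with Dyson's finding: every crew, green or veteran,
# faces the same chance of being lost on every mission. The loss rate is
# chosen so that about a quarter of crews survive a 30-mission tour.
LOSS_RATE = 1 - 0.25 ** (1 / 30)   # ~4.5% per mission

def simulate_tours(n_crews=100_000, tour_length=30, seed=0):
    rng = random.Random(seed)
    losses = [0] * tour_length     # crews lost on each mission number
    flying = [0] * tour_length     # crews that flew each mission number
    survivors = 0
    for _ in range(n_crews):
        for mission in range(tour_length):
            flying[mission] += 1
            if rng.random() < LOSS_RATE:
                losses[mission] += 1
                break
        else:
            survivors += 1
    return survivors, losses, flying

survivors, losses, flying = simulate_tours()
print(f"crews completing the tour: {survivors / 100_000:.1%}")
print(f"loss rate, missions 1-5 (green crews):     {sum(losses[:5]) / sum(flying[:5]):.2%}")
print(f"loss rate, missions 26-30 (veteran crews): {sum(losses[25:]) / sum(flying[25:]):.2%}")
```

In this model about a quarter of the crews finish the tour, and the “veterans” of twenty-five missions are lost at the same per-mission rate as the rookies: experience shows up nowhere, which is just the pattern Dyson kept finding in the real data.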

Still, despite the conclusiveness of Dyson’s results, his studies had no effect on the operations of Bomber Command: “The crews continued to die, experienced and inexperienced alike, until Germany was overrun and the war finally ended.” While Dyson’s research suggested that dying in the stratosphere over Lübeck had no relation to skill, no one at the highest levels wanted to admit that the survivors weren’t experts—that they were instead just lucky. Perhaps, had the war continued, Dyson’s argument might eventually have won out—but the war ended, fortunately (or not) for the air crews of the Royal Air Force, before Bomber Command had to admit he was right.

All of that, of course, might appear to have little to do with the Chicago Cubs—until it’s recognized that the end of their century-long championship drought had everything to do with the eventual success of Dyson’s argument. Unlike Bomber Command, the Cubs have been at the forefront of what The Ringer’s Rany Jazayerli calls baseball’s “Great Analytics War”—and unlike the contest between Dyson and his superiors, that war has had a definite conclusion. The battle between what Jazayerli calls an “objective, data-driven view” and an older vision of baseball “ended at 48 minutes after midnight on November 3”—when the Cubs (led by a front office that, like Dyson, trusted statistical analysis) recorded the final out of the 2016 season.

The man running that front office is Theo Epstein—formerly the general manager of the Boston Red Sox and now the Cubs’ president of baseball operations—who was converted to Dyson’s “faith” at an early age. According to ESPN, Epstein, “when he was 12 … got his first Bill James historical abstract”—and as many now recognize, James pioneered the application to baseball of the same basic approach Dyson used to think about how to bomb Frankfurt. An obscure graduate of the University of Kansas, James took a job after graduation as a night security guard at the Stokely-Van Camp pork and beans cannery in Kansas City—and while isolated in what one imagines were the sultry (or wintry) Kansas City evenings of the 1970s, James had plenty of time to think about what interested him. That turned out to be somewhat like the problem Dyson had faced a generation earlier: where Dyson was concerned with how to win World War II, James was interested in what appeared to be the much less portentous question of how to win the American League. James thereby invented an entire field—what’s now known as sabermetrics, or the statistical study of baseball—and the tools he invented have become the keys to baseball’s kingdom. After all, Epstein—hired in Boston by a team owner who brought James aboard as a consultant in 2003—not only constructed, as all the world knows, the Red Sox championship teams of 2004 and 2007, but then used James’ work to end the Cubs’ errand in baseball’s wilderness.

What James had done, of course, was show how the supposed baseball “experts”—the ex-players and cronies that dominated front offices at the time—in fact knew very little about the game: they did not know, for example, that the most valuable single thing a batter can do is to get on base, or that stolen bases are, for the most part, a waste of time. (The risk of making an out, as argued in, for example, David Smith’s “Maury Wills and the Value of a Stolen Base,” is more significant than the benefit of gaining a base.) James’ insights had not merely furnished the weaponry used by Epstein; during the early 2000s another baseball team, the Oakland A’s, and their general manager Billy Beane had used James-inspired work to get to the playoffs four consecutive years (from 2000 to 2003), and to win twenty consecutive games in 2002—a run famously chronicled in journalist Michael Lewis’ book Moneyball: The Art of Winning an Unfair Game, which later became a Hollywood movie starring Brad Pitt. What isn’t much known, however, is that Lewis has noticed the intellectual connection between this work in the sport of baseball—and the work Dyson thought of as similar to his own as a statistician for Bomber Command: the work of psychologist Kahneman and his now-deceased colleague, Amos Tversky.
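
The arithmetic behind the stolen-base claim is worth spelling out. The sketch below computes the break-even success rate for a steal of second with nobody out from a run-expectancy table; the three values in that table are rough, invented approximations for the sake of illustration, not figures from Smith’s study.

```python
# Illustrative run-expectancy values: expected runs scored from the given
# base/out state to the end of the inning. (Rough, invented approximations,
# not figures from any particular study or season.)
RE_RUNNER_ON_1ST_0_OUT = 0.85
RE_RUNNER_ON_2ND_0_OUT = 1.10
RE_BASES_EMPTY_1_OUT   = 0.25

gain_if_safe = RE_RUNNER_ON_2ND_0_OUT - RE_RUNNER_ON_1ST_0_OUT   # ~ +0.25 runs
loss_if_out  = RE_RUNNER_ON_1ST_0_OUT - RE_BASES_EMPTY_1_OUT     # ~ -0.60 runs

# Break-even success rate p solves: p * gain_if_safe = (1 - p) * loss_if_out
break_even = loss_if_out / (gain_if_safe + loss_if_out)
print(f"break-even stolen-base success rate: {break_even:.0%}")   # ~71%
```

Because the out costs more than twice what the extra base gains, a runner has to succeed roughly seven times in ten just to break even, which is the sense in which stolen bases are, for the most part, a waste of time.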

The connection between James, Kahneman, and Tversky—an excellent name for a law firm—was first noticed, Lewis says, in a review of his Moneyball book by University of Chicago professors Cass Sunstein, of the law school, and Richard Thaler, an economist. When Lewis described the failures of the “old baseball men,” and conversely Beane’s success, the two professors observed that “Lewis is actually speaking here of a central finding in cognitive psychology”: the finding upon which Kahneman and Tversky based their careers. Whereas Billy Beane’s enemies on other baseball teams tended “to rely on simple rules of thumb, on traditions, on habits, on what other experts seem to believe,” Sunstein and Thaler pointed out that Beane relied on the same principle that Dyson found when examining the relative success of bomber pilots: “Statistics and simple arithmetic tell us more about ourselves than expert intuition.” While Bomber Command, in other words, relied on the word of their “expert” pilots, who perhaps might have said they survived a run over a ball-bearing plant because of some maneuver or other, baseball front offices relied for decades on ex-players who thought they had won some long-ago game on the basis of some clever piece of baserunning. Tversky and Kahneman’s work, however—like that of Beane and Dyson—suggested that much of what passes as “expert” judgment can be, for decades if not centuries, an edifice erected on sand.

That work has, as Lewis found after investigating the point when his attention was drawn to it by Sunstein and Thaler’s article, been replicated in several fields: in the work of the physician Atul Gawande, for instance, who, Lewis says, “has shown the dangers of doctors who place too much faith in their intuition.” The University of California, Berkeley finance professor Terry Odean “examined 10,000 individual brokerage accounts to see if stocks the brokers bought outperformed stocks they sold and found that the reverse was true.” And another doctor, Toronto’s Donald Redelmeier—who studied under Tversky—found “that an applicant was less likely to be admitted to medical school if he was interviewed on a rainy day.” In all of these cases (and this is not even to bring up the subject of, say, the financial crisis of 2007-08, a crisis arguably brought on precisely by the advice of “experts”), investigation has shown that “expert” opinion may not be what it is cracked up to be. It may in fact be worse than the judgment of laypeople.

If so, then, might I suggest, David Frum’s “expert” suggestion about what to do to avoid a replay of the Trump candidacy—reinforce the rule of experts, a proposition that itself makes several questionable assumptions about the nature of the events of the past two years, if not decades—stops appearing reasonable. It begins, in fact, to appear rather more sinister: an attempt by those in Frum’s position in life—what we might call Eastern, Ivy League types—to will themselves into believing that Trump’s candidacy is fueled by a redneck resistance to “reason,” along with good old-fashioned American racism and sexism. But what the Cubs’ victory might suggest is that what could actually be powering Trump is the recognition by the American people that many of the “cures” dispensed by the American political class are nothing more than snake oil proffered by cynical tools like David Frum. That snake oil doubles down on exactly the same “expert” policies (like freeing capital to wander the world, while increasingly shackling labor) that, debatably, led to the rise of Trump in the first place—a message that, presumably, must be welcome to Frum’s superiors at whatever the contemporary equivalent of Bomber Command is.

Still, despite the fact that the David Frums of the world continue to peddle their nonsense in polite society, even this descendant of South Side White Sox fans must allow that Theo Epstein’s victory has given cause for hope down here at the street level of a Midwestern city that has, for more years than the Cubs have been in existence, been the plaything of Eastern-elite labor and trade policies. It’s a hope that, it seems, now has a Ground Zero.

You can see it at the intersection of Clark and Addison.

Our Game

Pick-up truck with Confederate battle flag and bumper stickers.

 

[Baseball] is our game: the American game … [it] belongs as much to our institutions, fits into them as significantly, as our constitutions, laws: is just as important in the sum total of our historic life.
—Walt Whitman. April, 1889.

The 2015 Chicago Cubs are now a memory, yet while they lived nearly all of Chicago was enthralled—not least because of the supposed prophecy of a movie starring a noted Canadian. For this White Sox fan, the enterprise reeked of the phony nostalgia baseball has become enveloped by, of the sort sportswriters like to invoke whenever they, for instance, quote Walt Whitman’s remark that baseball “is our game: the American game.” Yet even while, to their fans, this year’s Cubs were a time machine to what many envisioned as a simpler, and perhaps better, America—much as the truck pictured may be such a kind of DeLorean to its driver—in point of fact the team’s success was built upon precisely the kind of hatred of tradition that was the reason why Whitman thought baseball was “America’s game”: baseball, Whitman said, had “the snap, go, fling of the American character.” It’s for that reason, perhaps, that the 2015 Chicago Cubs may yet prove a watershed edition of the Lovable Losers: they might mark not only the return of the Cubs to the elite of the National League, but also the resurgence of a type of thinking that was in the vanguard in Whitman’s time and—like World Series appearances for the North Siders—of rare vintage since. It’s a resurgence that may, in a year of Donald Trump, prove far more important than the victories of baseball teams, no matter how lovable.

That, to say the least, is an ambitious thesis: the rise of the Cubs signifies little but that their new owners possess a lot of money, some might reply. But the Cubs’ return to importance was undoubtedly caused by the team’s adherence, led by former Boston general manager Theo Epstein, to the principles of what’s been called the “analytical revolution.” It’s a distinction that was made clear during the divisional series against the hated St. Louis Cardinals: whereas, for example, St. Louis manager Mike Matheny asserted, regarding how baseball managers ought to handle their pitching staff, that managers “first and foremost have to trust our gut,” the Cubs’ Joe Maddon (as I wrote about in a previous post) spent his entire season doing such things as batting his pitcher eighth, on the grounds that statistical analysis showed that by doing so his team gained a nearly-infinitesimal edge. (Cf. “Why Joe Maddon bats the pitcher eighth” ESPN.com)

Since the Cubs hired Epstein, few franchises in baseball have been as devoted to what is known as the “sabermetric” approach. When they brought him aboard, Epstein was already well-known for “using statistical evidence”—as the New Yorker’s Ben McGrath put it a year before Epstein’s previous team, the Boston Red Sox, overcame their own near-century of futility in 2004—rather than for relying upon what Epstein’s hero, the storied Bill James, has called “baseball’s Kilimanjaro of repeated legend and legerdemain”—the sort embodied by Matheny’s apparent reliance on seat-of-the-pants judgment.

Yet, while Bill James’ sort of thinking may be astonishingly new to baseball’s old guard, it would have been old hat to Whitman, who had the example of another Bill James directly in front of him. To follow the sabermetric approach after all requires believing (as the American philosopher William James did according to the Internet Encyclopedia of Philosophy), “that every event is caused and that the world as a whole is rationally intelligible”—an approach that not only would Whitman have understood, but applauded.

Such at least was the argument of the late American philosopher Richard Rorty, whose lifework was devoted to preserving the legacy of late nineteenth and early twentieth century writers like Whitman and James. To Rorty, both of those earlier men subscribed to a kind of belief in America rarely seen today: both implicitly believed in what James’ follower John Dewey would call “the philosophy of democracy,” in which “both pragmatism and America are expressions of a hopeful, melioristic, experimental frame of mind.” It’s in that sense, Rorty argued, that William James’ famous assertion that “the true is only the expedient in our way of thinking” ought to be understood: what James meant by lines like this was that what we call “truth” ought to be tested against reality in the same way that scientists test their ideas about the world via experiments instead of relying upon “guts.”

Such a frame of mind, however, has been out of fashion in academia for a long time, Rorty often noted: as early as the 1940s, Robert Hutchins and Mortimer Adler of the University of Chicago were reviling the philosophy of Dewey and James as “vulgar, ‘relativistic,’ and self-refuting.” To say, as James did say, “that truth is what works” was—according to thinkers like Hutchins and Adler—“to reduce the quest for truth to the quest for power.” To put it another way, Hutchins and Adler provided the Ur-example of what’s become known as Godwin’s Law: the idea that, sooner or later, every debater will claim that the opponent’s position logically ends up at Nazism.

Such thinking is by no means extinct in academia: indeed, in many ways Rorty’s work at the end of his life was involved in demonstrating how the sorts of arguments Hutchins and Adler enlisted for their conservative politics had become the very lifeblood of those supposedly opposed to the conservative position. That’s why, to those whom Rorty called the “Unpatriotic Academy,” the above picture—taken at a gas station just over the Ohio River in southern Indiana—will be confirmation of the view of the United States held by those who “find pride in American citizenship impossible,” and “associate American patriotism with an endorsement of atrocities”: to such people, America and science are more or less the same thing as the kind of nearly-explicit racism demonstrated in the photograph of the truck.

The problem with those sorts of arguments, Rorty wanted to claim in return, is that they are all too willing to take the views of some conservative Americans at face value: the view that, for instance, “America is a Christian country.” That sentence is remarkable precisely because it is not taken from the rantings of some Southern fundamentalist preacher or Republican candidate, but rather is the opening sentence of an article by the novelist and essayist Marilynne Robinson in, of all places, the New York Review of Books. That it could appear there, I think Rorty would have said, shows just how much today’s academia really shares the views of its supposed opponents.

Yet, as Rorty was always arguing, the ideas held by the pragmatists are not so easily reduced to the mere American jingoism that the many critics of Dewey and James and the rest would like to portray them as—nor is “America” so easily conflated with simple racism. That is because the arguments of the American pragmatists were (arguably) simply a restatement of a set of ideas held by a man who lived long before North America was even added to the world’s maps: a man known to history as Ibn Khaldun, who was born in Tunis on Africa’s Mediterranean coastline in the year 1332 of the Western calendar.

Khaldun’s views of history, as set out in his book the Muqaddimah (“Introduction,” often known by its Greek title, Prolegomena), can be seen as the forerunners of the ideas of John Dewey and William James, as well as the ideas of Bill James and the front office of the Chicago Cubs. According to a short one-page biography of the Arab thinker by one “Dr. A. Zahoor,” for example, Khaldun believed that writing history required such things as “relating events to each other through cause and effect”—much as both men named William James believe[d] that baseball events are not inexplicable. As Khaldun himself wrote:

The rule for distinguishing what is true from what is false in history is based on its possibility or impossibility: That is to say, we must examine human society and discriminate between the characteristics which are essential and inherent in its nature and those which are accidental and need not be taken into account, recognizing further those which cannot possibly belong to it. If we do this, we have a rule for separating historical truth from error by means of demonstrative methods that admits of no doubt.

This statement is, I think, hardly distinguishable from what the pragmatists or the sabermetricians are after: the discovery of what Khaldun calls “those phenomena [that] were not the outcome of chance, but were controlled by laws of their own.” In just the same way that Bill James and his followers wish to discover things like when, if ever, it is permissible or even advisable to attempt to steal a base, or lay down a bunt (both, he says, are more often inadvisable strategies, precisely on the grounds that employing them leaves too much to chance), Khaldun wishes to discover ways to identify ideal strategies in a wider realm.

Assuming, then, that Dewey and James were right to claim that such ideas ought to be one and the same as the idea of “America,” we could say that Ibn Khaldun, if not the first, was certainly one of the first Americans—that is, one of the first to believe in those ideas we would later come to call “America.” That Khaldun was entirely ignorant of such places as southern Indiana should, by these lights, no more count against his Americanness than Donald Trump’s ignorance of more than geography ought to count against his. Indeed, conducted according to this scale, it should be no contest as to which—between Donald Trump, Marilynne Robinson, and Ibn Khaldun—is the more likely to be a baseball fan. Nor, need it be added, which the better American.

Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus

Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the University of Wisconsin’s Board of Regents issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would more or less allow Wisconsin’s Board of Regents to fire anyone virtually at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own clichés of corporate speak here. The seriousness with which Walker takes the university’s mission as a searcher for truth might perhaps be discerned by the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American essentially exists right now, today, on the back of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when somebody hits a car driving the wrong way.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, where Maria Konnikova described a talk the moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the whole field was inhabited by people who were at least liberal on the ideological spectrum, and many of whom were radicals, with very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: these psychologists were markedly left-of-center compared to the general population chiefly when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs these professors tilted left at about a sixty to seventy percent clip, when it came to what sometimes are called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be that—like some predator/prey model from ecology—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or any of the other surely fascinating topics the American academy is currently investigating. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season, and of those games, 112 and counting have been played by the Chicago Cubs—because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That’s something that no major league baseball team has ever done—and the reasons Joe Maddon has for tossing aside baseball orthodoxy like so many spit cups of tobacco juice are the reason why, eggheads and corporate lackeys aside, Joe Maddon is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity of the rules of baseball, something to which most fans are so inured that they have become unconscious of its meaning. That peculiarity is this: baseball has history. It’s a phrase that might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead jumps to the next player after the last batter of the previous inning. This is important because pitchers traditionally bat in the ninth spot in a given lineup: they are usually the weakest batters on any team by a wide margin, and by batting them last a manager usually ensures that they do not bat until at least the second, or even third, inning. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the numbers of plate appearances on the part of the pitcher, that does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, that strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start at the beginning with every new inning, batting the weakest hitter last means that a lineup’s best players—usually the one through three spots—do not have as many runners on base as they might otherwise.

Now, the value of this move of batting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusory advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and in which the best teams in the National League (the American League, which does not require pitchers to bat, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the difference between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.

Yet why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes in order to make that happen. The reason he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so single-mindedly devoted to getting the maximum value out of its central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The number of votes behind a single elector in the Electoral College, for example, can differ by hundreds of thousands from one state to the next; meanwhile, through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or, more directly, as Dylan Matthews put it for the Washington Post two years ago, if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to wring all the potential value from them.
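Figures like Matthews’ are simple arithmetic once you grant the premise of two senators per state regardless of population. The sketch below (in Python, using an invented five-state union rather than real Census data) shows the kind of calculation involved: sort the states from least to most populous and ask how small a share of the population can control a given number of seats.

```python
import math

def minimum_population_share(populations, seats_needed, seats_per_state=2):
    """Smallest share of total population whose states control at least
    `seats_needed` Senate seats, building the bloc from the least-populous
    states upward."""
    sizes = sorted(populations.values())
    states_needed = math.ceil(seats_needed / seats_per_state)
    return sum(sizes[:states_needed]) / sum(sizes)

# Invented five-state union, purely for illustration: a "majority" here
# means 6 of 10 seats, which the three smallest states supply with only
# six percent of the people.
toy_union = {"A": 1, "B": 2, "C": 3, "D": 20, "E": 74}
print(minimum_population_share(toy_union, seats_needed=6))  # 0.06
```

Fed the actual fifty state populations and the seat thresholds for a Senate majority or a filibuster, the same calculation yields figures in the neighborhood of the ones Matthews reports.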

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken on here: one reason, perhaps, for the split Maria Konnikova has observed between psychology professors’ opinions on economic matters and their opinions on “cultural” ones. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances do eventually get corrected, and this one—the matter of the relative value of an American vote—undoubtedly will be too. The problem, of course, is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. Perhaps there is one possible upside, however: when that correction comes—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler on the front lines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but the slow and patient work of tax accountants and auditors.

You know, the people who counted.

The End of Golf?

And found no end, in wandering mazes lost.
John Milton. Paradise Lost. Book II, 561.

What are sports, anyway, at their best, but stories played out in real time?
Charles P. Pierce. “Home Fields.” Grantland.

We were approaching our tee shots down the first fairway at Chechessee Creek Golf Club, where I am wintering this year, when I got asked the question that, I suppose, will only be asked more and more often. As I got closer to the first ball I readied my laser rangefinder—the one that Butler National Golf Club, outside of Chicago, finally required me to get. The question was this: “Why doesn’t the PGA Tour allow rangefinders in competition?” My response was this, and it was nearly immediate: “Because that’s not golf.” That’s an answer that, perhaps, appeared clearer a few weeks ago, before the United States Golf Association announced a change to the Rules of Golf in conjunction with the Royal and Ancient of St. Andrews. It’s still clear, I think—as long as you’ll tolerate a side-trip through both baseball and, for hilarity’s sake, John Milton.

For the rest of this year, any player in a tournament conducted under the Rules of Golf is subject to disqualification should he or she take out a cell phone during a round to consult a radar map of incoming weather. But come the New Year, that will be permitted: as the Irish Times wonders, “Will the sight of a player bending down to pull out a tuft of grass and throwing skywards to find out the direction of the wind be a thing of the past?” Perhaps not, but the new decision certainly shows which way the wind is blowing in Far Hills. Technology is coming to golf, as, it seems, to everything.

At some point, and likely not that far away, all relevant information will be available to a player in real time: wind direction, elevation, humidity, and, you know, yardage. The question will be: is that still golf? When the technology becomes robust enough, will the game be simply a matter of executing shots, as if all the great courses of the world were your local driving range? If so, it’s hard to imagine the game in the same way: to me, at least, part of the satisfaction of playing isn’t just hitting a shot well, it’s hitting the correct shot—not just flushing the ball on the sweet spot, but seeing it fly (or run) up toward the pin. If everyone is hitting the correct club every time, does the game become simply a repetitive exercise to see whose tempo is particularly “on” that day?

Amateur golfers think golf is about hitting shots; professionals know that golf is about selecting which shots to hit. One of the great battles of golf, to my mind, is the contest of the excellent ball-striker versus the canny veteran—Bobby Jones versus Walter Hagen, to those of you who know your golf history: Jones was known for the purity of his strikes, while Hagen, like Seve Ballesteros after him, was known for his ability to recover from his impure ones. Or we can generalize the point and say golf is a contest between ballstriking and craftiness. If that contest goes, does the game go with it?

That thought would go like this: golf is a contest because Bobby Jones’ ability to hit every shot purely is balanced by Walter Hagen’s ability to hit every shot correctly. That is, Jones might hit every shot flush, but he might not hit the right club; Hagen might not hit every shot flush, but he will hit the correct club, or to the correct side of the green or fairway, or the like. But if Jones can get the kind of perfect information that allows him to hit the correct club more often, that might be a fatal advantage—paradoxically ending the game entirely, because golf becomes simply an exercise in who has the better reflexes. The idea is similar to the way in which the high pitching mound became, by the late 1960s, such an advantage for pitchers that hitting went into a tailspin; in 1968 Bob Gibson became close to unhittable, striking out 268 batters and posting a 1.12 ERA.

As it happens, baseball is (once again) wrestling with questions very like these at the moment. It’s fairly well known at this point that the major leagues have developed a system called PITCHf/x, which is capable of tracking every pitch thrown in every game throughout the season—yet still, that system can’t replace human umpires. “Even an automated strike zone,” wrote Ben Lindbergh in the online sports magazine Grantland recently, “would have to have a human element.” That’s for two reasons. One is the more-or-less obvious one that, while an automated system has no trouble judging whether a pitch is over the plate or not (“inside” or “outside”), it has no end of trouble judging whether a pitch is “high” or “low.” That’s because the strike zone is judged not only by each batter’s height but also by his batting stance: two players who are the same height can still have different strike zones, because one might crouch more than the other, for instance.
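To make the asymmetry concrete, here is a minimal sketch (mine, not PITCHf/x’s actual logic) of an automated ball-strike call, assuming pitch locations measured in feet as the ball crosses the front of the plate: the horizontal limits are fixed by the rulebook, while the vertical limits have to be supplied for each batter, and each stance, by somebody.

```python
# The plate is 17 inches wide; a baseball is roughly 2.9 inches across.
PLATE_HALF_WIDTH = 17 / 2 / 12   # feet
BALL_RADIUS = 1.45 / 12          # feet

def call_pitch(px, pz, sz_top, sz_bot):
    """Return 'strike' if any part of the ball crosses the zone, else 'ball'.

    px: horizontal location, feet from the center of the plate
    pz: height above the ground, feet
    sz_top, sz_bot: this batter's zone limits for this stance, feet
    """
    horizontal = abs(px) <= PLATE_HALF_WIDTH + BALL_RADIUS              # fixed by rule
    vertical = (sz_bot - BALL_RADIUS) <= pz <= (sz_top + BALL_RADIUS)   # batter-dependent
    return "strike" if horizontal and vertical else "ball"

# The same pitch, two hypothetical batters with different stances:
print(call_pitch(px=0.2, pz=3.4, sz_top=3.5, sz_bot=1.6))  # upright batter: strike
print(call_pitch(px=0.2, pz=3.4, sz_top=3.2, sz_bot=1.5))  # crouching batter: ball
```

The same pitch, in other words, can be a strike to one batter and a ball to another—and the numbers that decide which are exactly the ones no camera can read off the rulebook.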

There is, however, a perhaps more deeply rooted reason why umpires will likely never be replaced: while it’s true that PITCHf/x can judge nearly every pitch in every game, every once in (a very great) while the system just flat out doesn’t “see” a pitch. It doesn’t even register that a ball was thrown. So all the people calling for “robot umpires” (it’s a hashtag on Twitter now) are, in the words of Dan Brooks of Brooks Baseball (as reported by Lindbergh), “willing to accept a much smaller amount of inexplicable error in exchange for a larger amount of explicable error.” In other words, while the great majority of pitches would likely be called more accurately, the mistakes made by such a system would be a lot more catastrophic than the mistakes made by human umpires. Imagine, say, that Zack Greinke was pitching a perfect game—and the automated system just didn’t see a pitch.

These are, however, technical issues regarding mechanical aids, not quite the existential issue raised by what we might term a perfectly transparent market. Yet they demonstrate just how difficult such a state would be, in practical terms, to achieve: like arguing over whether communism or capitalism is better in its pure state, maybe this is an argument that will never become anything more than a hypothetical for a classroom. The exercise, however, does what seminar exercises are meant to do—it illuminates something about the object in question: a computer doesn’t know the difference between the first pitch of April and the last pitch of the last game of the World Series, and we do, which I think tells us something about what we value in both baseball and golf.

Which is what brings up Milton, since the obvious (ha!) lesson here could be the one that Stanley Fish, the great explicator of John Milton, says is the lesson of Milton’s Paradise Lost: “I know that you rely upon your senses for your apprehension of reality, but they are unreliable and hopelessly limited.” Fish’s point refers to a moment in Book III, when Milton is describing how Satan lands upon the sun:

There lands the Fiend, a spot like which perhaps
Astronomer in the Sun’s lucent Orb
Through his glaz’d optic Tube yet never saw.

Milton compares Satan’s arrival on the sun to the sunspots that Galileo (whom Milton had met) witnessed through his telescope—at least, that is what the first part of the thought appears to imply. The last three words, however—yet never saw—rip away that certainty: the comparison Milton so carefully sets up between Satan’s landing and the sunspots is, he then tells the reader, actually nothing like what happened.

The pro-robot crowd might see this as a point in favor of robots, to be sure—why trust the senses of an umpire? But what Fish, and Milton, would say is quite the contrary: Galileo’s telescope “represents the furthest extension of human perception, and that is not enough.” In other words, no matter how far you pursue a technological fix (i.e., robots), you will still end up with more or less the same problems you had before—only now they might be more troublesome than the ones you have today. And pretty obviously, a system that was entirely flawless for every pitch of the regular season—which encompasses, remember, thousands of games just at the major league level, never mind the number of individual pitches thrown—and then just didn’t see a strike three that (would have) ended a Game 7 is not acceptable. That’s not really what I meant by “not golf,” though.

What I meant might best be explained by reference to (surprise, heh) Fish’s first major book, the one that made his reputation: Surprised by Sin: The Reader in Paradise Lost. That book set out to bridge what had seemed an unbridgeable divide, one that had existed for nearly two centuries at least: a divide between those who read the poem (Paradise Lost, that is) as doing what Milton asked of it—to “justify the ways of God to men”—and those who claimed, with William Blake, that Milton was “of the Devil’s party without knowing it.” Fish’s argument, quite ingenious, was in essence that Milton’s technique was true to his intention, but that the technique, misunderstood, could easily explain how some readers could mis-read him so badly. Which is rather broad, to be sure—as in most things, the Devil is in the details.

What Fish argued was that Paradise Lost could be read as one (very) long instance of what are now called “garden path” sentences: grammatical sentences that begin in a way that appears to direct the reader toward one interpretation, only to reveal their true meaning at the end. Very often they require the reader to go back and reread, as in the sentence “Time flies like an arrow; fruit flies like a banana.” Another example is Emo Philips’ line “I like going to the park and watching the children run around because they don’t know I’m using blanks.” They are sentences, in other words, where the structure implies one interpretation at the beginning, only to have that interpretation snatched away by the sentence’s end.

Fish argued that Paradise Lost was, in fact, full of these moments—and, more significantly, that they were there because Milton put them there. One example Fish uses is just that bit from Book III, where Satan gets compared, in detail, with the latest developments in solar astronomy—until Milton jerks the rug out with the words “yet never saw.” Satan’s landing is just like a sunspot, in other words … except it isn’t. As Fish says,

in the first line two focal points (spot and fiend) are offered the reader who sets them side by side in his mind … [and] a scene is formed, strengthened by the implied equality of spot and fiend; indeed the physicality of the impression is so persuasive that the reader is led to join the astronomer and looks with him through a reassuringly specific telescope (‘glaz’d optic Tube’) to see—nothing at all (‘yet never saw’).

The effect is a more elaborate version of the effect of sentences like “The old man the boats” or “We painted the wall with cracks”—typical examples of garden-path sentences. Yet why would Milton go to the trouble of constructing the simile if, in reality, the things being compared are nothing alike? It’s Fish’s answer to that question that made his mark on criticism.

Throughout Paradise Lost, Fish argues, Milton again and again constructs his language “in such a way that [an] error must be made before it can be acknowledged by the surprised reader.” That isn’t an accident: in a sense, it takes the writerly distinction between “showing” and “telling” to its endpoint. After all, the poem is about the Fall of Man, and what better way to illustrate that Fall than by demonstrating it—the fallen state of humanity—within the reader’s own mind? As Fish says, “the reader’s difficulty”—that is, the continual state of thinking one thing, only to find out something else—“is the result of the act that is the poem’s subject.” What, that is, were Adam and Eve doing in the garden, other than believing things were one way (as related by one slippery serpent) when actually they were another? And Milton’s point is that trusting readers to absorb the lesson merely by being told it is just what got the primordial pair in trouble in the first place: the reason Paradise Lost needed writing at all is that our First Parents didn’t listen to what God told them (you know: don’t eat that apple).

If Fish is right, then Milton concluded that simply telling readers, whether of his time or ours, isn’t enough. Instead, he concocted a fantastic kind of riddle: an artifact where, just by reading it, the reader literally enacts the Fall of Man within her own mind. As the lines of the poem pass before the reader’s eyes, she continually credits the apparent sense of what she is reading, only to be brought up short by a sudden change in sense. Which is all very well, it might be objected, but even if that were true about Paradise Lost (and not everyone agrees that it is), it’s something else to say that it has anything to do with baseball umpiring—or golf.

Yet it does, and for just the same reason that Paradise Lost applies to wrangling over the strike zone. One reason we couldn’t institute a system that might simply fail to see one pitch or another is that, while we could certainly take or leave most pitches—nobody cares about the first pitch of a game, for instance, or the middle out of the seventh inning of a Cubs-Rockies game in April—there are some pitches we absolutely must know about. And if we consider what gives those pitches more value than others—and surely everyone agrees that some pitches are worth more than others—then what we arrive at is that baseball doesn’t just take place on a diamond; it also takes place in time. Baseball is a narrative, not a pictorial, art.

To put it another way, what Milton does in his poem is just what a good golf architect does with a golf course: it isn’t enough to be told you should take a five-iron off this tee and a three-wood off another. The golfer has to be shown it: what you thought was one state of affairs was in fact another. And not merely shown it—because that, in itself, would only be another kind of telling—the golfer, or at least the reflective golfer, must come to see the point as he traverses the course. If a golf hole, in short, is a kind of sentence, then the assumptions with which he began the hole must be dashed by the time he reaches the green.

As it happens, this is just what the Golf Club Atlas says about the fourth at Chechessee Creek, where a “classic misdirection play comes.” At the fourth tee, “the golfer sees a big, long bunker that begins at the start of the fairway and hooks around the left side.” But the green is to the right, which causes the golfer to think “‘I’ll go that way and stay away from the big bunker.’” Yet, because there is a line of four small bunkers somewhat hidden down the right side, and bunkers to the right near the green, “the ideal tee ball is actually left center.” “Standing behind the hole”—that is, once play is over—“the left to right angle of the green is obvious and clearly shows that left center of the fairway is ideal,” which makes the fourth “the cleverest hole on the course.” And it is, so I’d argue, because it uses precisely the same technique as Milton.

That, in turn, might be the basis for an argument about why getting yardages by hand (or rather, by foot) is so necessary to professional golf at the highest level. As I mentioned, amateur golfers think golf is about hitting shots, while professionals know that golf is about selecting which shots to hit. Amateurs look at a golf hole and think, “What a pretty picture,” while a professional looks at one and thinks of the sequence of shots it would take to reach the goal. That is why, even though so much of golf design is conjured by way of pretty pictures, whether in oils or in photographs, and even though it might be thought that pictures, being “artistic,” are antithetical to the mechanistic forces of computers—that it is the beauty of golf courses that makes the game irreducible to analysis—that idea gets things precisely wrong.

Machines, that is, can paint a picture of a hole that can’t be beat: just look at the innumerable golf apps available for smartphones. But computers still can’t reliably make sense of a sentence like “Time flies like an arrow; fruit flies like a banana.” While computers can call (nearly) every pitch over the course of a season, they don’t know why a pitch in the seventh inning of a World Series game matters more than one in a spring training game. If everything is right there in front of you, then computers or other mechanical aids are quite useful; it’s only when the end of a process causes you to re-evaluate everything that came before that you are in the presence of the human. Working out yardages without the aid of a machine forces the kind of calculation that sees a hole in time, not in space—that sees a hole as a sequence of events, not (as it were) a whole.

Golf isn’t just the ability to hit shots—it’s also, and arguably more significantly, the ability to decide what the best path to the hole is. One argument for why further automation wouldn’t harm the game in the slightest is the tale told by baseball umpiring: no matter how far technological answers are pursued, human beings must still be involved in calling balls and strikes, even if not in quite the same way as now. Some people, that is, might read Milton’s warning about astronomy as saying that pursuing that avenue of knowledge is a blind alley, when what Milton might instead be saying is that the mistake is to think there could be an end to the pursuit: that perfect information could yield perfect decision-making. We can extend “human perception” all we like—it will not make a whit of difference.

Milton thought that was because of our status as Original Sinners, but it isn’t necessary to take that line to acknowledge limitations, whether they are of the human animal in general or just endemic to living in a material universe. Some people appear to take this truth as a bit of a downer: if we cannot be Gods, what then is the point? Others, and this seems to be the point of Paradise Lost, take this as the condition of possibility: if we were Gods, then golf (for example) would be kind of boring, as merely the attempt to mechanically re-enact the same (perfect) swing, over and over. But Paradise Lost, at least in one reading, seems to assure us that that state is unachievable. As technology advances, so too will human cleverness: Bobby Jones can never defeat Walter Hagen once and for all.

Yet, as the example of Bob Gibson demonstrates, trusting to the idea that, somehow, everything will balance out in the end is just as dewy-eyed as anything else. Sports can ebb and flow in popularity: look at horse racing or boxing. Baseball reacted to Gibson’s 13 shutouts and Denny McLain’s 31 victories in 1968, as well as Carl Yastrzemski’s heroic charge to a .301 batting average, the lowest ever to win the batting crown. Throughout the 1960s, says Bill James in The New Bill James Historical Abstract, Gibson and his colleagues competed in a pitcher’s paradise: “the rules all stacked in their favor.” In 1969, the pitcher’s mound was lowered from 15 inches to 10, and the strike zone was squeezed too, its top dropping from the shoulders to the armpits and its bottom rising to the top of the knees. The tide of the rules began to run the other way, until the offensive explosion of the 1990s.

Nothing, in other words, happens in a vacuum. Allowing perfect yardages would, I suspect, advantage the ball-strikers at the expense of the crafty shotmakers. Preserving the game, then—a game which, contrary to some views, isn’t always the same, but changes in response to events—would require some compensating change in the rules. Just what that might be is hard, for me at least, to say at the moment. But it’s important, if we are to still have the game at all, to know what it is and is not—what’s worth preserving, and why we’d like to preserve it. We can sum it up, I think, in one sentence. Golf is a story, not a picture. We ought to keep whatever allows golf to continue to tell us the stories we want—and, perhaps, need—to hear.