The Color of Water

No one gets lucky til luck comes along.
Eric Clapton
     “It’s In The Way That You Use It”
     Theme Song for The Color of Money (1986).

 

 

The greenish tint to the Olympic pool wasn’t the only thing fishy about the water in Rio last month: a “series of recent reports,” Patrick Redford of Deadspin noted, “assert that there was a current in the pool at the Rio Olympics’ Aquatic Stadium that might have skewed the results.” Or—to make the point clear in a way the pool wasn’t—the water in the pool flowed in such a way that it gave the advantage to swimmers starting in certain lanes: as Redford writes, “swimmers in lanes 5 through 8 had a marked advantage over racers in lanes 1 through 4.” According, however, to ESPN’s Michael Wilbon—a noted African-American sportswriter—such results shouldn’t be of concern to people of color: “Advanced analytics,” Wilbon wrote this past May, “and black folks hardly ever mix.” To Wilbon, the rise of statistical analysis poses a threat to African-Americans. But Wilbon is wrong: in reality, the “hidden current” in American life holding back not only black Americans but all Americans is not analytics—it’s the suspicions of supposedly “progressive” people like Michael Wilbon.

The thesis of Wilbon’s piece, “Mission Impossible: African-Americans and Analytics”—published on ESPN’s race-themed website, The Undefeated—was that black people have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” Whereas, in an earlier age, white people like Thomas Jefferson questioned black people’s literacy, nowadays, it seems, it’s ok to question their ability to understand mathematics—a “ridiculous” (according to The Guardian’s Dave Schilling, another black journalist) stereotype that Wilbon attempts to paint as, somehow, politically progressive: Wilbon, that is, excuses his absurd beliefs on the basis that analytics “seems to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” Yet, while Wilbon isn’t alone in his distrust of analytics, it’s actually just that “philosophy” that may hold the most promise for political progress—not only for African-Americans, but every American.

Wilbon’s argument, after all, rests on a thesis familiar from the classrooms of American humanities departments: when Wilbon says the “greater the dependence on the numbers, the more challenged people are to tell (or understand) the narrative without them,” he is echoing an argument deployed every semester in university seminar rooms throughout the United States. Wilbon is, in other words, merely repeating the contention, by now essentially an article of faith within the halls of the humanities, that without a framework—or (as it’s sometimes called) “paradigm”—raw statistics are meaningless: the doctrine sometimes known as “social constructionism.”

That argument is, as nearly everyone who has taken a class in the departments of the humanities in the past several generations knows, that “evidence” only points in a certain direction once certain baseline axioms are assumed. (An argument first put about, by the way, by the physician Galen in the second century AD.) As American literary critic Stanley Fish once rehearsed the argument in the pages of the New York Times, according to its terms investigators “do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description.” Instead, Fish went on, researchers “begin with the assumption (an act of faith) that the world is an object capable of being described … and they then develop procedures … that yield results, and they call those results reasons for concluding this or that.” According to both Wilbon and Fish, in other words, the answers people find depend not on the structure of reality itself, but instead on the baseline assumptions the researcher begins with: what matters is not the raw numbers, but the contexts within which the numbers are interpreted.

What’s important, Wilbon is saying, is the “narrative,” not the numbers: “Imagine,” Wilbon says, “something as pedestrian as home runs and runs batted in adequately explaining [Babe] Ruth’s overall impact” on the sport of baseball. Wilbon’s point is that a knowledge of Ruth’s statistics won’t tell you about the hot dogs the great baseball player ate during games, or the famous “called shot” during the 1932 World Series—what he is arguing is that statistics only point toward reality: they aren’t reality itself. Numbers, by themselves, don’t say anything about reality; they are only a tool with which to access reality, and by no means the only tool available: in one of Wilbon’s examples, Steph Curry, the great guard for the NBA’s Golden State Warriors, knew he shot better from the corners—an intuition that later statistical analysis bore out. Wilbon’s point is that both Curry’s intuition and statistical analysis told the same story, implying that there’s no fundamental reason to favor one road to truth over the other.

In a sense, to be sure, Wilbon is right: statistical analysis is merely a tool for getting at reality, not reality itself, and certainly other tools are available. Yet, it’s also true that, as statistician and science fiction author Michael F. Flynn has pointed out, astronomy—now accounted one of the “hardest” of physical sciences, because it deals with obviously real physical objects in space—was once not an observational science, but instead a mathematical one: in ancient times, Chinese astronomers were called “calendar-makers,” and a European astronomer was called a mathematicus. As Flynn says, “astronomy was not about making physical discoveries about physical bodies in the sky”—it was instead “a specialized branch of mathematics for making predictions about sky events.” Without telescopes, in other words, astronomers did not know what, exactly, say, the planet Mars was: all they could do was make predictions, based on mathematical analysis, about what part of the sky it might appear in next—predictions that, over the centuries, became perhaps-startlingly accurate. But as a proto-Wilbon might have said in (for instance) the year 1500, such astronomers had no more direct knowledge of what Mars is than a kindergartner has of the workings of the Federal Reserve.

In the same fashion, Wilbon might point out about the swimming events in Rio, there is no direct evidence of a current in the Olympic pool: the researchers who assert that there was such a current base their arguments on statistical evidence of the races, not examination of the conditions of the pool. Yet the evidence for the existence of a current is pretty persuasive: as the Wall Street Journal reported, fifteen of the sixteen swimmers, both men and women, who swam in the 50-meter freestyle event finals—the one event most susceptible to the influence of a current, because swimmers only swim one length of the pool in a single direction—swam in lanes 4 through 8, and swimmers who swam in outside lanes in early heats and inside lanes in later heats actually got slower. (A phenomenon virtually unheard of in top-level events like the Olympics.) Barry Revzin, of the website Swim Swam, found that a given Olympic swimmer picked up “a 0.2 percent advantage for each lane … closer to [lane] 8,” Deadspin’s Redford reported, and while that could easily seem “inconsequentially small,” Redford remarked, “it’s worth pointing out that the winner in the women’s 50 meter freestyle only beat the sixth-place finisher by 0.12 seconds.” It’s a very small advantage, in other words, which is to say that it’s very difficult to detect—except by means of the very same statistical analysis distrusted by Wilbon. But although the advantage is seemingly small, it is enough to determine the winner of the gold medal. Wilbon, in other words, is quite right to say that statistical evidence is not a direct transcript of reality—he’s wrong, however, if he is arguing that statistical analysis ought to be ignored.
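A back-of-the-envelope calculation shows why a 0.2 percent-per-lane effect is not as “inconsequentially small” as it sounds. The sketch below is mine, not Revzin’s: the roughly 24-second baseline time and the four-lane gap are illustrative assumptions, chosen only to put the percentage on the same scale as the 0.12-second margin Redford cites.

```python
# Back-of-the-envelope scale check of the reported lane effect (illustrative only).
# Assumptions are mine, not Revzin's: a representative 50m freestyle final time of
# roughly 24 seconds, and a comparison between lane 4 and lane 8.

BASE_TIME = 24.0            # seconds, roughly a women's 50m freestyle final time
ADVANTAGE_PER_LANE = 0.002  # the reported 0.2 percent per lane closer to lane 8
LANE_GAP = 4                # lane 4 versus lane 8

advantage_seconds = BASE_TIME * ADVANTAGE_PER_LANE * LANE_GAP
margin = 0.12               # gold-to-sixth margin in the women's 50m freestyle

print(f"Estimated lane-8 advantage over lane 4: {advantage_seconds:.2f} s")
print(f"Gold-to-sixth margin:                   {margin:.2f} s")
print("Lane effect exceeds the medal margin:", advantage_seconds > margin)
```

On those assumptions the lane effect works out to roughly two tenths of a second, larger than the margin that separated gold from sixth place, which is the point of the exercise rather than any claim about the actual race.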

To be fair, Wilbon is not arguing exactly that: “an entire group of people,” he says, “can’t simply refuse to participate in something as important as this new phenomenon.” Yet Wilbon is worried about the growth of statistical analysis because he views it as a possible means for excluding black people. If, as Wilbon writes, it’s “the emotional appeal,” rather than the “intellect[ual]” appeal, that “resonates with black people”—a statement that, if it were written by a white journalist, would immediately cause a protest—then Wilbon worries that, in a sports future run “by white, analytics-driven executives,” black people will be even further on the outside looking in than they already are. (And that’s pretty far outside: as Wilbon notes, “Nate McMillan, an old-school, pre-analytics player/coach, who was handpicked by old-school, pre-analytics player/coach Larry Bird in Indiana, is the only black coach hired this offseason.”) Wilbon’s implied stance, in other words—implied because he nowhere explicitly says so—is that since statistical evidence cannot be taken at face value, but only through screens and filters that owe more to culture than to the nature of reality itself, therefore the promise (and premise) of statistical analysis could be seen as a kind of ruse designed to perpetuate white dominance at the highest levels of the sport.

Yet there are at least two objections to make about Wilbon’s argument: the first being the empirical observation that in U.S. Supreme Court cases like McCleskey v. Kemp for instance (in which the petitioner argued that, according to statistical analysis, murderers of white people in Georgia were far more likely to receive the death penalty than murderers of black people), or Teamsters v. United States (in which—according to Encyclopedia.com—the Court ruled, on the basis of statistical evidence, that the Teamsters union had “engaged in a systemwide practice of minority discrimination”), statistical analysis has been advanced to demonstrate the reality of racial bias. (A demonstration against which, by the way, time and again conservatives have countered with arguments against the validity of statistical analysis that essentially mirror Wilbon’s.) To think then that statistical analysis could be inherently biased against black people, as Wilbon appears to imply, is empirically nonsense: it’s arguable, in fact, that statistical analysis of the sort pioneered by people like sociologist Gunnar Myrdal has done at least as much as, if not more than, (say) classes on African-American literature to combat racial discrimination.

The more serious issue, however, is a logical objection: Wilbon’s two assertions are in conflict with each other. To reach his conclusions, Wilbon ignores (like others who make similar arguments) the implications of his own reasoning: statistics ought to be ignored, he says, because only “narrative” can grant meaning to otherwise meaningless numbers—but, if it is so that numbers themselves cannot “mean” without a framework to grant them meaning, then they cannot pose the threat that Wilbon says they might. In other words, if Wilbon is right that statistical analysis is biased against black people, then it means that numbers do have meaning in themselves, while conversely if numbers can only be interpreted within a framework, then they cannot be inherently biased against black people. By Wilbon’s own account, in other words, nothing about statistical analysis implies that such analysis can only be pursued by white people, nor could the numbers themselves demand only a single (oppressive) use—because if that were so, then numbers would be capable of providing their own interpretive framework. Wilbon cannot logically advance both propositions simultaneously.

That doesn’t mean, however, that Wilbon’s argument—the argument, it ought to be noted, of many who think of themselves as politically “progressive”—is not having an effect: it’s possible, I think, that the relative success of this argument is precisely what is causing Americans to ignore a “hidden current” in American life. That current could be described by an “analytical” observation made by professors Sven Steinmo and Jon Watts some two decades ago: “No other democratic system in the world requires support of 60% of legislators to pass government policy”—an observation that, in turn, may be linked to the observable reality that, as political scientists Frances E. Lee and Bruce Oppenheimer have noted, “less populous states consistently receive more federal funding than states with more people.” Understanding the impact of these two observations, and their effects on each other, would, I suspect, throw a great deal of light on the reality of American lives, white and black—yet it’s precisely the sort of reflection that the “social construction” dogma advanced by Wilbon and company appears specifically designed to avoid. While to many, even now, the arguments for “social construction” and such might appear utterly liberatory, it’s possible to tell a tale in which it is just such doctrines that are the tools of oppression today.

Such an account would be, however—I suppose Michael Wilbon or Stanley Fish might tell us—simply a story about the one that got away.


Hot Shots

 

… when the sea was calm all boats alike
Show’d mastership in floating …
—William Shakespeare.
     Coriolanus Act IV, Scene 3 (1608).

 

 

“Indeed,” wrote the Canadian scholar Marshall McLuhan in 1964, “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” Once, it was a well-known line among literate people, though it is much less so now. It occurred to me recently, however, as I read an essay by Walter Benn Michaels of the University of Illinois at Chicago, in the course of which Michaels took issue with Matthew Yglesias of Vox. Yglesias, Michaels tells us, tried to make the argument that

although “straight white intellectuals” might tend to think of the increasing economic inequality of the last thirty years “as a period of relentless defeat for left-wing politics,” we ought to remember that the same period has also seen “enormous advances in the practical opportunities available to women, a major decline in the level of racism … and wildly more public and legal acceptance of gays and lesbians.”

Michaels replies to Yglesias’ argument by observing that “10 percent of the U.S. population now earns just under 50 percent of total U.S. income”—a figure that is, unfortunately, just the tip of the economic iceberg when it comes to inequality in America. But the real problem—the problem that Michaels’ reply does not do justice to—is that there is a logical flaw in the kind of “left” that we have now: one that advocates for the rights of minorities rather than labors for the benefit of the majority. That is, a “cultural” left rather than a scientific one: the kind we had when, in 1910, American philosopher John Dewey could write (without being laughed at) that Darwin’s Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” The physicist Freeman Dyson discovered why that difference matters when he was just twenty years old, when Winston Churchill’s government paid him to think about what was really happening in the flak-filled skies over Berlin.

The British had a desperate need to know, because they were engaged in bombing Nazi Germany back to, at the very least, the Renaissance. Hence they employed Dyson as a statistician, to analyze the operations of Britain’s Bomber Command. Specifically, Dyson was to investigate whether bomber crews “learned by experience”: whether the more missions a crew flew, the better it became at blowing up Germany—and the Germans in it. Obviously, if they did, then Bomber Command could try to isolate what those crews were doing and teach what it was to the others so that Germany and the Germans might be blown up better.

The bomb crews themselves believed, Dyson tells us, that as “they became more skillful and more closely bonded, their chances of survival would improve”—a belief that, for obvious reasons, was “essential to their morale.” But as Dyson went over the statistics of lost bombers, examining the relation between experience and loss rates while controlling for the effects of weather and geography, he discovered the terrible truth:

“There was no effect of experience on loss rate.”

The lives of each bomber crew, in other words, were dependent on chance, not skill, and the belief in their own expertise was just an illusion in the face of horror—an illusion that becomes the more awful when you know that, out of the 125,000 airmen who served in Bomber Command, 55,573 were killed in action.

“Statistics and simple arithmetic,” Dyson therefore concluded, “tell us more about ourselves than expert intuition”: a cold lesson to learn, particularly at the age of twenty—though that can be tempered by the thought that at least it wasn’t Dyson’s job to go to Berlin. Still, the lesson is so appalling that perhaps it is little wonder that, after the war, it was largely forgotten, and has only been taken up again by a subject nearly as joyful as the business of killing people on an industrial scale is horrifying: sport.

In one of the most cited papers in the history of psychology, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Thomas Gilovich, Robert Vallone and Amos Tversky studied how “players and fans alike tend to believe that a player’s chances of hitting a shot are greater following a hit than following a miss on the previous shot”—but “detailed analysis … provided no evidence for a positive correlation between the outcomes of successive shots.” Just as, in other words, the British airmen believed some crews had “skill” that kept them in the air, when in fact all that kept them aloft was, say, the poor aim of a German anti-aircraft gunner or a happily-timed cloud, so too did the three co-authors find that, in basketball, people believed some shooters could get “hot.” That is, reel off seemingly impossible numbers of shots in a row, like when Ben Gordon, then with the Chicago Bulls, knocked down 9 consecutive three-pointers against Washington in 2006. But in fact such streaks are just what chance predicts given a player’s overall shooting percentage, not evidence of some special “hot” state: toss a coin enough times and the coin will produce “runs” of heads and tails too.
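The coin-flip point is easy to check for yourself. The short simulation below is a minimal sketch of my own: it gives an imaginary shooter a flat 50 percent chance on every shot, with no “hotness” whatsoever, and still turns up long streaks of consecutive makes; the shot count and the 50 percent figure are arbitrary choices, not numbers from the Gilovich, Vallone and Tversky paper.

```python
# Minimal sketch: streaks in a purely random sequence of makes and misses.
# A "shooter" with a constant 50% chance and no hot hand still produces long runs.
import random

random.seed(42)

def longest_run(shots):
    """Length of the longest streak of consecutive makes (True values)."""
    best = current = 0
    for made in shots:
        current = current + 1 if made else 0
        best = max(best, current)
    return best

# 1,000 independent shots: each one ignores what happened on the shot before.
season = [random.random() < 0.5 for _ in range(1000)]
print("Longest streak of makes in 1,000 independent 50% shots:", longest_run(season))
```

Run it a few times with different seeds and streaks of eight or ten makes in a row appear routinely, which is exactly the pattern fans read as a player heating up.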

The “hot hand” concept in fact applies to more than simply the players: it extends to coaches also. “In sports,” says Leonard Mlodinow in his book The Drunkard’s Walk: How Randomness Rules Our Lives, “we have developed a culture in which, based on intuitive feelings of correlation, a team’s success or failure is often attributed largely to the ability of the coach”—a reality that perhaps explains just why, as Florida’s Lakeland Ledger reported in 2014, the average tenure of NFL coaches over the past decade has been 38 months. Yet as Mlodinow also says, “[m]athematical analysis of firings in all major sports … has shown that those firings had, on average, no effect on team performance”: fans (and perhaps more importantly, owners) tend to think of teams rising and falling based on their coach, while in reality a team’s success has more to do with the talent the team has.

Yet while sports are a fairly trivial part of most peoples’ lives, that is not true when it comes to our “coaches”: the managers that run large corporations. As Diane Stafford reported in the Kansas City Star a few years back, American corporations have as little sense of the real value of CEOs as NFL owners have of their coaches: the “pay gap between large-company CEOs and average American employees,” Stafford said, “vaulted from 195 to 1 in 1993 to 354 to 1 in 2012.” Meanwhile, more than a third “of the men who appeared on lists ranking America’s 25 highest-paid corporate leaders between 1993 and 2012 have led companies bailed out by U.S. taxpayers, been fired for poor performance or led companies charged with fraud.” Just like the Lancasters flown by the aircrews Dyson studied, American workers (and their companies’ stockholders) have been taken for a ride by men flying on the basis of luck, not skill.

Again, of course, many in what’s termed the “cultural” left would insist that they, too, stand with American workers against the bosses, that they, too, wish things were better, and that they, too, think paying twenty bucks for a hot dog and a beer is an outrage. What matters, however, isn’t what professors or artists or actors or musicians or the like say—just as it didn’t matter what Britain’s bomber pilots thought about their own skills during the war. What matters is what their jobs say. And the fact of the matter is that cultural production, whether it be in academia or in New York or in Hollywood, simply is the same as thinking you’re a hell of a pilot, or that you must be “hot,” or that Phil Jackson is a genius. That might sound counterintuitive, of course—I thought writers and artists and, especially, George Clooney were all on the side of the little guy!—but, like McLuhan says, what matters is the medium, not the message.

The point is likely easiest to explain in terms of the academic study of the humanities, because at least there people are forced to explain themselves in order to keep their jobs. What one finds, across the political spectrum, is some version of the same dogma: students in literary studies can, for instance, refer to American novelist James Baldwin’s insistence, in the 1949 essay “Everybody’s Protest Novel,” that “literature and sociology are not the same,” while, at the other end of the political spectrum, political science students can refer to Leo Strauss’ attack on “the ‘scientific’ approach to society” in his 1958 Thoughts on Machiavelli. Every discipline in the humanities has some version of the point, because without such a doctrine they couldn’t exist: without them, there’s just a bunch of people sitting in a room reading old books.

The effect of these dogmas can perhaps be best seen by reference to the philosophical version of them, which has the benefit of at least being clear. David Hume called it the “is-ought problem”: as the Scotsman claimed in A Treatise of Human Nature, “the distinction of vice and virtue is not founded merely on the relations of objects.” Later, in 1903’s Principia Ethica, British philosopher G.E. Moore called the same point the “naturalistic fallacy”: the idea that, as J.B. Schneewind of Johns Hopkins has put it, “claims about morality cannot be derived from statements of facts.” The advantage for philosophers is clear enough: if it’s impossible to talk about morality or ethics strictly by the light of science, that certainly justifies talking about philosophy to the exclusion of anything else. But in light of the facts about shooting hoops, or about the deluded confidence of the bomber crews, I would hope that the absurdity of Moore’s “idea” ought to be self-evident: if it can be demonstrated that something is a matter of luck, and not skill, that changes the moral calculation drastically.

That then is the problem with running a “left” based around the study of novels or rituals or films or whatever: at the end of the day, the study of the humanities, just like the practice of the arts, discourages the thought that, as Mlodinow puts it, “chance events are often conspicuously misinterpreted as accomplishments or failures.” And without such a consideration, I would suggest, any talk of “values” or “morality” or whatever you would like to call it, is empty. It matters if your leader is lucky or skillful; it matters if success is the result of hard work or of who your parents are—and a “left” built on the opposite premises is not, to my mind, a “left” at all. Although many people in the “cultural left,” then, might imagine that their overt exhortations to virtue outweigh the covert message sent by their institutional positions, reality tells a different tale: if you tell people they can fly, you should not be shocked when they crash.

Closing With God in the City of Brotherly Love, or, How To Get A Head on the Pennsylvania Pike

However do senators get so close to God?
How is it that front office men never conspire?
—Nelson Algren.
“The Silver-Colored Yesterday.”
     Chicago: City on the Make (1951).

Sam Hinkie, the general manager of the Philadelphia 76ers—a basketball team in the National Basketball Association—resigned from his position this past week, citing the fact that he “no longer [had] the confidence” that he could “make good decisions on behalf of investors in the Sixers.” As writers from ESPN and many other outlets have observed, because the ownership of the Sixers had recently given him supervisors (the father-son duo of the Colangelos: Jerry and the other one), Hinkie had effectively been given a vote of no confidence. But the owners’ disapproval appears to have been more than simply a rejection of Hinkie: it also appears to be a rejection of the theory by which Hinkie conducted operations—a theory that Hinkie called “the Process.” It’s the destiny of this theory that’s concerning: the fate of Hinkie the man is irrelevant, but the fate of his idea is one that concerns all Americans—because the theory of “the Process” is also the theory of America. At least, according to one (former) historian.

To get from basketball to the fate of nations might appear quite a leap, of course—but that “the Process” applies to more than basketball can be demonstrated first by showing that it is (or perhaps, was) also more or less Tiger Woods’ theory about golf. As Tiger used to say, as he did for example in the press conferences for his wins at both the 2000 PGA Championship and the 2008 U.S. Open, the key to winning majors is “hanging around.” As the golfer said in 2012, the “thing is to keep putting myself [in contention]” (as Deron Snyder reported for The Root that year), or as he said in 2000, after he won the PGA Championship, “in a major championship you just need to hang around,” and also that “[i]n majors, if you kind of hang around, usually good things happen.” Eight years later, after the 2008 U.S. Open Championship (which he famously won on a broken leg), Woods said that “I was just hanging around, hanging around.” Woods, that is, seems to have seen his task as a golfer as giving himself the chance to win by staying near the lead—thereby giving destiny, or luck, or chance, the opportunity to put him over the top with a win.

That’s more or less the philosophy that guided Hinkie’s tenure at the head of the 76ers, though to understand it fully requires first understanding the intricacies of one of the cornerstones of life in the NBA: the annual player draft. Like many sport leagues, the NBA conducts a draft of new players each year, and also like many other leagues, teams select new players roughly in the order of their records in the previous season: i.e., the prior season’s league champion picks last. Conversely, teams that missed the last season’s playoffs participate in what’s become known as the “draft lottery”: all the teams that missed the playoffs are entered into the lottery, with their chances of receiving the first pick in the draft weighted by their win-loss records. (In other words, the worst team in the league has the highest chance of getting the first pick in the next season’s draft—but getting that pick is not guaranteed.) Hinkie’s “Process” was designed to take this reality of NBA life into account, along with the fact that, in today’s NBA, championships are won by “superstar” players: players, that is, who are selected with “lottery” picks at the top of the draft.
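The weighted-lottery mechanism is simple enough to mock up. The toy simulation below is a sketch of my own: the four team labels and the weights are invented stand-ins rather than the NBA’s actual lottery odds, and they serve only to show how a bad record raises the chance of the first pick without ever guaranteeing it.

```python
# Toy sketch of a weighted draft lottery: worse records get more weight, so the
# worst team is the most likely -- but never guaranteed -- to land the first pick.
# Team labels and weights are invented for illustration, not the NBA's real odds.
import random
from collections import Counter

random.seed(0)

teams = ["Worst", "SecondWorst", "ThirdWorst", "FourthWorst"]
weights = [250, 200, 150, 100]  # hypothetical lottery "combinations" per team

def run_lottery():
    """Draw the team that receives the first overall pick."""
    return random.choices(teams, weights=weights, k=1)[0]

results = Counter(run_lottery() for _ in range(100_000))
for team in teams:
    print(f"{team:12s} wins the top pick in {results[team] / 100_000:.1%} of simulations")
```

The worst team comes out ahead most often, but a meaningful share of the simulated lotteries hand the pick to someone else, which is the gamble “the Process” was built around.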

Although in other sports, like for instance the National Football League, very good players can fall to very low rounds in their drafts, that is not the case in the contemporary NBA. While Tom Brady of the NFL’s New England Patriots was famously not drafted until the sixth round of the 2000 draft, and has since emerged as one of that league’s best players, stories like that simply do not happen in the NBA. As a study by FiveThirtyEight’s Ian Levy has shown, for example, in the NBA “the best teams are indeed almost always driven by the best players”—an idea that seems confirmed by the fact that the NBA is, as several studies have found, the easiest American professional sports league to bet on. (As Noah Davis and Michael Lopez observed in 2015, also in FiveThirtyEight, in “hockey and baseball, even the worst teams are generally given a 1 in 4 chance of beating the best teams”—a figure nowhere near the comparable numbers in pro basketball.) In other words, in the NBA the favorite nearly always wins, a fact that would appear to correlate with the idea that NBA wins and losses are nearly always determined simply by the sheer talent of the players rather than, say, such notions as “team chemistry” or the abilities of a given coach.

With those facts in mind, then, the only possible path to an NBA championship—a goal that Hinkie repeatedly says was his—is to sign a transcendent talent to a team’s roster, and since (as experience has shown) it is tremendously difficult to sign an already-established superstar away from another team in the league, the only real path most teams have to such a talent is through the draft. But since such hugely capable players are usually only available as the first pick (though sometimes second, and very occasionally third—as Michael Jordan, often thought of as the best player in the history of the NBA, was drafted in 1984), that implies that the only means to a championship is first to lose a lot of games—and thus become eligible for a “lottery” draft pick. This was Sam Hinkie’s “Process”—a theory that sounded so odd to some that many openly mocked Hinkie’s notions: the website Deadspin for instance called Hinkie’s team a “Godless Abomination” in a headline.

Although surely the term was meant comedically, Deadspin’s headline writer in fact happens to have hit upon something central to both Woods’ and Hinkie’s philosophy: it seems entirely amenable to the great American saying, attributed to obscure writer Coleman Cox, that “I am a great believer in Luck: the harder I work, the more of it I seem to have.” Or to put it another way, “you make your own luck.” As can be seen, all of these notions leave the idea of God or any other supernatural agency to the side: God might exist, they imply, but it’s best to operate as if he doesn’t—a sentiment that might appear contrary to the “family values” often espoused by Republican politicians, as it seems merely a step away from disbelieving in God at all. But in fact, according to the arch-conservative former Speaker of the House and sometime presidential candidate Newt Gingrich, this philosophy simply was the idea of the United States—at least until the 1960s came and wrecked everything. In reality, however, Gingrich’s idea that until the 1960s the United States was governed by the rules “don’t work, don’t eat” and “your salvation is spiritual” is not only entirely compatible with the philosophies of both Hinkie and Woods—but entirely opposed to the philosophy embodied by the United States Constitution.

To see that point requires seeing the difference between Philadelphia’s “76ers” and the Philadelphians who matter to Americans most today: the “87ers.” Whereas the major document produced in Philadelphia in 1776, in other words, held that “all men are created equal”—a statement that is perhaps most profitably read as a statement about probability, not in the sentimental terms with which it is often read—the major document produced in the same city over a decade later in 1787 is, as Seth Ackerman of the tiny journal Jacobin has pointed out, “a charter for plutocracy.” That is, whereas the cornerstone of the Declaration of Independence appears to be a promise in favor of the well-known principle of “one man, one vote,” the government constructed by the Constitution appears to have been designed according to an opposing principle: in the United States Senate, for instance, a single senator can hold up a bill the rest of the country demands, and “[w]hereas France can change its constitution anytime with a three-fifths vote of its Congress and Britain could recently mandate a referendum on instant runoff voting by a simple parliamentary majority,” as Ackerman says, “the U.S. Constitution requires the consent of no less than thirty-nine different legislatures comprising roughly seventy-eight separately elected chambers” [original emp.]. Pretty obviously, if it takes that much work to change the laws, that will clearly advantage those with pockets deep enough to extend to nearly every corner of the nation—a notion that cruelly ridicules the idea, first advanced in Philadelphia in 1776 and now espoused by Gingrich, Woods, and Hinkie, that with enough hard work “luck” will even out.

Current data, in fact, appear to support Ackerman’s contentions: as Edward Wolff, an economist at New York University and the author of Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It (a book published in 1996) noted online at The Atlantic’s website recently, “average real wages peaked in 1973.” “Median net worth,” Wolff goes on to report, “plummeted by 44 percent between 2007 and 2013 for middle income families, 61 percent for lower middle income families, and by 70 percent for low income families.” This is a pattern, as many social scientists have reported, consistent with the extreme inequality faced in very poor nations: nations usually also notable for their deviation from the “one man, one vote” principle. (Cf. the history of contemporary Russia, and then work backwards.) With that in mind, then, a good start for the United States might be if the entire U.S. Senate resigned—on the grounds that they cannot, any longer, make good decisions on behalf of the investors.

Our Game

Pick-up truck with Confederate battle flag and bumper stickers.

 

[Baseball] is our game: the American game … [it] belongs as much to our institutions, fits into them as significantly, as our constitutions, laws: is just as important in the sum total of our historic life.
—Walt Whitman. April, 1889.

The 2015 Chicago Cubs are now a memory, yet while they lived nearly all of Chicago was enthralled—not least because of the supposed prophecy of a movie starring a noted Canadian. For this White Sox fan, the enterprise reeked of the phony nostalgia baseball has become enveloped by, of the sort sportswriters like to invoke whenever they, for instance, quote Walt Whitman’s remark that baseball “is our game: the American game.” Yet even while, to their fans, this year’s Cubs were a time machine to what many envisioned as a simpler, and perhaps better, America—much as the truck pictured may be such a kind of DeLorean to its driver—in point of fact the team’s success was built upon precisely the kind of hatred of tradition that was the reason why Whitman thought baseball was “America’s game”: baseball, Whitman said, had “the snap, go, fling of the American character.” It’s for that reason, perhaps, that the 2015 Chicago Cubs may yet prove a watershed edition of the Lovable Losers: they might prove not only the return of the Cubs to the elite of the National League, but also the resurgence of a type of thinking that was in the vanguard in Whitman’s time and—like World Series appearances for the North Siders—of rare vintage since. It’s a resurgence that may, in a year of Donald Trump, prove far more important than the victories of baseball teams, no matter how lovable.

That, to say the least, is an ambitious thesis: the rise of the Cubs signifies little but that their new owners possess a lot of money, some might reply. But the Cubs’ return to importance was undoubtedly caused by the team’s adherence, led by former Boston general manager Theo Epstein, to the principles of what’s been called the “analytical revolution.” It’s a distinction that was made clear during the divisional series against the hated St. Louis Cardinals: whereas, for example, St. Louis manager Mike Matheny asserted, regarding how baseball managers ought to handle their pitching staff, that managers “first and foremost have to trust our gut,” the Cubs’ Joe Maddon (as I wrote about in a previous post) spent his entire season doing such things as batting his pitcher eighth, on the grounds that statistical analysis showed that by doing so his team gained a nearly-infinitesimal edge. (Cf. “Why Joe Maddon bats the pitcher eighth,” ESPN.com.)

Since the Cubs hired Epstein away from Boston, few franchises in baseball have been as devoted to what is known as the “sabermetric” approach. When the Cubs hired him, Epstein was already well-known for “using statistical evidence”—as the New Yorker’s Ben McGrath put it a year before Epstein’s previous team, the Boston Red Sox, overcame their own near-century of futility in 2004—rather than for relying upon what Epstein’s hero, the storied Bill James, has called “baseball’s Kilimanjaro of repeated legend and legerdemain”: the sort embodied by Matheny’s apparent reliance on seat-of-the-pants judgement.

Yet, while Bill James’ sort of thinking may be astonishingly new to baseball’s old guard, it would have been old hat to Whitman, who had the example of another Bill James directly in front of him. To follow the sabermetric approach after all requires believing (as the American philosopher William James did according to the Internet Encyclopedia of Philosophy), “that every event is caused and that the world as a whole is rationally intelligible”—an approach that not only would Whitman have understood, but applauded.

Such at least was the argument of the late American philosopher Richard Rorty, whose lifework was devoted to preserving the legacy of late nineteenth and early twentieth century writers like Whitman and James. To Rorty, both of those earlier men subscribed to a kind of belief in America rarely seen today: both implicitly believed in what James’ follower John Dewey would call “the philosophy of democracy,” in which “both pragmatism and America are expressions of a hopeful, melioristic, experimental frame of mind.” It’s in that sense, Rorty argued, that William James’ famous assertion that “the true is only the expedient in our way of thinking” ought to be understood: what James meant by lines like this was that what we call “truth” ought to be tested against reality in the same way that scientists test their ideas about the world via experiments instead of relying upon “guts.”

Such a frame of mind, however, has been out of fashion in academia since at least the 1940s, Rorty often noted: already in that decade, Robert Hutchins and Mortimer Adler of the University of Chicago were reviling the philosophy of Dewey and James as “vulgar, ‘relativistic,’ and self-refuting.” To say, as James did say, “that truth is what works” was—according to thinkers like Hutchins and Adler—“to reduce the quest for truth to the quest for power.” To put it another way, Hutchins and Adler provided the ur-example of what’s become known as Godwin’s Law: the idea that, sooner or later, every debater will claim that the opponent’s position logically ends up at Nazism.

Such thinking is by no means extinct in academia: indeed, in many ways Rorty’s work at the end of his life was involved in demonstrating how the sorts of arguments Hutchins and Adler enlisted for their conservative politics had become the very lifeblood of those supposedly opposed to the conservative position. That’s why, to those whom Rorty called the “Unpatriotic Academy,” the above picture—taken at a gas station just over the Ohio River in southern Indiana—will be confirmation of the view of the United States held by those who “find pride in American citizenship impossible,” and “associate American patriotism with an endorsement of atrocities”: to such people, America and science are more or less the same thing as the kind of nearly-explicit racism demonstrated in the photograph of the truck.

The problem with those sorts of arguments, Rorty wanted to claim in return, was that they are all too willing to take the views of some conservative Americans at face value: the view that, for instance, “America is a Christian country.” That sentence is remarkable precisely because it is not taken from the rantings of some Southern fundamentalist preacher or Republican candidate, but rather is the opening sentence of an article by the novelist and essayist Marilynne Robinson in, of all places, the New York Review of Books. That it could appear there, I think Rorty would have said, shows just how much today’s academia really shares the views of its supposed opponents.

Yet, as Rorty was always arguing, the ideas held by the pragmatists are not the mere American jingoism that the many critics of Dewey and James and the rest would like to portray them as—nor is “America” so easily conflated with simple racism. That is because the arguments of the American pragmatists were (arguably) simply a restatement of a set of ideas held by a man who lived long before North America had even been added to the world’s maps: a man known to history as Ibn Khaldun, who was born in Tunis on Africa’s Mediterranean coastline in the year 1332 of the Western calendar.

Khaldun’s views of history, as set out in his book the Muqaddimah (“Introduction,” often known by its Greek title, Prolegomena), can be seen as the forerunners of the ideas of John Dewey and William James, as well as the ideas of Bill James and the front office of the Chicago Cubs. According to a short one-page biography of the Arab thinker by one “Dr. A. Zahoor,” for example, Khaldun believed that writing history required such things as “relating events to each other through cause and effect”—much as both men named William James believe[d] that baseball events are not inexplicable. As Khaldun himself wrote:

The rule for distinguishing what is true from what is false in history is based on its possibility or impossibility: That is to say, we must examine human society and discriminate between the characteristics which are essential and inherent in its nature and those which are accidental and need not be taken into account, recognizing further those which cannot possibly belong to it. If we do this, we have a rule for separating historical truth from error by means of demonstrative methods that admits of no doubt.

This statement is, I think, hardly distinguishable from what the pragmatists or the sabermetricians are after: the discovery of what Khaldun calls “those phenomena [that] were not the outcome of chance, but were controlled by laws of their own.” In just the same way that Bill James and his followers wish to discover things like when, if ever, it is permissible or even advisable to attempt to steal a base, or lay down a bunt (both, he says, are more often inadvisable strategies, precisely on the grounds that employing them leaves too much to chance), Khaldun wishes to discover ways to identify ideal strategies in a wider realm.

Assuming then that we could say that Dewey and James were right to claim that such ideas ought to be one and the same as the idea of “America,” then we could say that Ibn Khaldun, if not the first, was certainly one of the first Americans—that is, one of the first to believe in those ideas we would later come to call “America.” That Khaldun was entirely ignorant of such places as southern Indiana should, by these lights, no more count against his Americanness than Donald Trump’s ignorance of more than geography ought to count against his. Indeed, conducted according to this scale, it should be no contest as to which—between Donald Trump, Marilynne Robinson, and Ibn Khaldun—is the more likely to be a baseball fan. Nor, need it be added, which the better American.

Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus


Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the board of trustees of the University of Wisconsin issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would more or less allow Wisconsin’s Board of Regents to fire anyone virtually at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own clichés of corporate speak here. The seriousness with which Walker takes the university’s mission as a searcher for truth might perhaps be discerned by the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American essentially exists right now, today, on the back of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when somebody hits a car driving the wrong way.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, where Maria Konnikova described a talk moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the whole field was inhabited by people who were at least liberal, and many who were radicals, on the ideological spectrum, and very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: actually, these psychologists were only markedly left-of-center compared to the general population when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs, these professors tilted left at about a sixty to seventy percent clip, when it came to what sometimes are called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be so that—like some predator/prey model from ecological study—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or any of the other surely fascinating topics the American academy is currently investigating. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season—and 112 of them, and counting, have been played by the Chicago Cubs, because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That’s something no major league baseball team has ever done—and the reason Joe Maddon has for tossing aside baseball orthodoxy like so many spit cups of tobacco juice is the reason why, eggheads and corporate lackeys aside, Joe Maddon is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity in the rules of baseball, something that most fans are so inured to that they have become unconscious of its meaning. That peculiarity is this: baseball has history. It’s a phrase that might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead jumps to the next player after the last batter of the previous inning. This is important because, traditionally, pitchers bat in the ninth spot in a given lineup because they are usually the weakest batters on any team by a wide margin, which means that by batting them last, a manager usually ensures that they do not bat until at least the second, or even third, inning at the earliest. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the number of plate appearances on the part of the pitcher, that does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, that strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start at the beginning with every new inning, batting the weakest hitter last means that a lineup’s best players—usually the one through three spots—do not have as many runners on base as they might otherwise.
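That carryover is simple to model, and a crude Monte Carlo makes the trade-off concrete. The sketch below is mine, not anything from Maddon, Carleton, or Lindbergh: every plate appearance is reduced to a single invented on-base probability, every time a batter reaches base all runners advance exactly one base, and nothing else about baseball is simulated, so the absolute run totals are meaningless. What it does show is how, once the inning-to-inning carryover is included, the two lineups can be compared at all.

```python
# Deliberately crude Monte Carlo comparing "pitcher bats 8th" vs. "pitcher bats 9th".
# Every plate appearance is a single on-base probability; when a batter reaches,
# all runners advance one base and a runner on third scores. The probabilities
# are invented for illustration -- they are not real data.
import random

random.seed(1)

POSITION_PLAYERS = [0.38, 0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30]  # eight hitters
PITCHER = 0.15                                                        # weak hitter

def simulate_game(lineup, innings=9):
    """Runs scored in one simulated game, carrying the batting order over
    from inning to inning (the 'baseball has history' point)."""
    runs, batter = 0, 0
    for _ in range(innings):
        outs, bases = 0, [False, False, False]      # first, second, third
        while outs < 3:
            if random.random() < lineup[batter]:
                runs += bases[2]                    # runner on third scores
                bases = [True, bases[0], bases[1]]  # everyone moves up one base
            else:
                outs += 1
            batter = (batter + 1) % 9
    return runs

def average_runs(lineup, games=20_000):
    return sum(simulate_game(lineup) for _ in range(games)) / games

pitcher_ninth = POSITION_PLAYERS + [PITCHER]
pitcher_eighth = POSITION_PLAYERS[:7] + [PITCHER] + [POSITION_PLAYERS[7]]

print("Pitcher 9th:", round(average_runs(pitcher_ninth), 3), "runs per game")
print("Pitcher 8th:", round(average_runs(pitcher_eighth), 3), "runs per game")
```

On a toy model like this the gap between the two lineups is a small fraction of a run per game, which is consistent with the debate described below: the effect, whichever way it points, is tiny.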

Now, the value of this move of putting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusory advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet, in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and simultaneously the best teams in the National League (the American League, which does not allow pitchers to bat, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and, conversely, prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the difference between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.

Yet, why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes in order to make that happen. The reason that he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so singlemindedly devoted to getting the maximum value out of its central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The value of a single elector in the Electoral College, for example, can differ by hundreds of thousands of votes cast by voters each Election Day, depending on the state; while through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or to put it more directly, as Dylan Matthews put it for the Washington Post two years ago, if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to seek all of their potential value.
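The arithmetic behind figures like Matthews’ is easy to reproduce given a table of state populations. The sketch below shows the shape of that calculation only: the three populations listed are fabricated placeholders, and a real table of Census estimates for all fifty states would have to be substituted to recover numbers like 17.82 or 11.27 percent.

```python
# Sketch of the calculation behind "senators representing X% of the population can
# form a majority": sort the states from least to most populous and ask what share
# of the country the smallest ones contain. The populations below are fabricated
# placeholders; substitute real Census estimates for all 50 states to reproduce
# figures like the 17.82 percent Matthews cites.

state_populations = {
    "ExampleStateA": 600_000,     # placeholder values, not real data
    "ExampleStateB": 1_100_000,
    "ExampleStateC": 39_000_000,
    # ... the remaining 47 states would go here ...
}

def smallest_states_share(populations, n_states):
    """Population share held by the n least-populous states."""
    sizes = sorted(populations.values())
    return sum(sizes[:n_states]) / sum(sizes)

# 26 states supply 52 senators (a majority); Matthews' filibuster figure uses the
# smallest 20 states. Guard against the undersized placeholder table above.
for n_states, label in [(26, "Senate majority"), (20, "filibuster bloc")]:
    n = min(n_states, len(state_populations))
    share = smallest_states_share(state_populations, n)
    print(f"{label}: the smallest {n} states hold {share:.2%} of the population")
```

The point of the exercise is simply that the calculation is mechanical: two senators per state, sort by population, sum the small end, and the disproportion falls out of the arithmetic on its own.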

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken on here: one reason, perhaps, explaining the split in psychology professors between their opinions on economic matters and “cultural” ones observed by Maria Konnikova. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances are inevitably corrected, which undoubtedly will occur in this matter of the relative value of an American vote. The problem of course is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. Perhaps there is one possible upside however: when that happens—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler in the frontlines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but instead the slow and patient work of tax accountants and auditors.

You know, the people who counted.