Caterpillars

All scholars, lawyers, courtiers, gentlemen,
They call false caterpillars and intend their death.
2 Henry VI 

 

When Company A, 27th Armored Infantry Battalion, U.S. 9th Armored Division, reached the forested hills overlooking the Rhine in the early afternoon of 7 March 1945 and found the Ludendorff Bridge still, improbably, standing, its men may have been surprised to discover that they had not only located the last passage beyond Hitler’s Westwall into the heart of Germany—but also stumbled into a controversy that, seventy years on, is still continuing. That controversy could be represented by an essay written some years ago by the Belgian political theorist Chantal Mouffe on the American philosopher Richard Rorty: the problem with Rorty’s work, Mouffe claimed, was that he believed that the “enemies of human happiness are greed, sloth, and hypocrisy, and no deep analysis is required to understand how they could be eliminated.” Such beliefs are capital charges in intellectual-land, where the stock-in-trade is precisely the kind of “deep analysis” that Rorty thought (at least according to Mouffe) unnecessary, so it’s little wonder that, for the most part, it’s Mouffe who has had the better of this argument—especially considering Rorty has been dead since 2007. Yet as the men of Company A might have told Mouffe—whose work is known, according to her Wikipedia article, for her “use of the work of Carl Schmitt” (a legal philosopher who joined the Nazi Party on 1 May 1933)—it’s actually Rorty’s work that explains just why they came to the German frontier; an account whose only significance lies in the fact that it may be the ascendance of Mouffe’s view over Rorty’s that explains such things as, for instance, why no one was arrested after the financial crisis of 2007-08.

That may, of course, sound like something of a stretch: what could the squalid affairs that nearly led to the crash of the world financial system have in common with such recondite matters as the dark duels conducted at academic conferences—or a lucky accident in the fog of war? But the link runs precisely through the Ludendorff, sometimes called “the Bridge at Remagen”—a bridge that might not have been standing for Company A to find had the Nazi state really been the complicated ideological product described by people like Mouffe, instead of the product of “ruthless gangsters, distinguishable only by their facial hair” (as Rorty, following Vladimir Nabokov, once described Lenin, Trotsky, and Stalin). That’s because, according to (relatively) recent historical work that unfortunately has not yet deeply penetrated the English-speaking world, in March 1945 the German generals who had led the Blitzkrieg in 1940 and ’41—and then headed the defense of Hitler’s criminal empire—were far more concerned with the routing numbers of their bank accounts than with the routes into Germany.

As “the ring closed around Germany in February, March, and April 1945”—wrote Ohio State historian Norman Goda in 2003—“and as thousands of troops were being shot for desertion,” certain high-ranking officers who, in some cases, had been receiving extra “monthly payments” directly from the German treasury on orders of Hitler himself, and whose deposits sat in banks located in the immediate path of the enemy, quickly arranged to have those deposits shifted to accounts in what they hoped would be safer locales. In other words, in the face of the Allied advance, Hitler’s generals—men like Heinz Guderian, who in 1943 was awarded “Deipenhof, an estate of 937 hectares (2,313 acres) worth RM [Reichsmark] 1.24 million” deep inside occupied Poland—were preoccupied with defending their money, not Germany.

Guderian—who led the tanks that broke the French lines at Sedan, the direct cause of the Fall of France in May 1940—was only one of many top-level military leaders who received secret payoffs even before the beginning of World War II: Walther von Brauchitsch, Guderian’s superior, had for example been getting—tax-free—double his salary since 1938, while Field Marshal Erhard Milch, who quit his prewar job running Lufthansa to join the Luftwaffe, received a birthday “gift” from Hitler each year worth more than $100,000 U.S. They were just two of the many high-ranking officers to receive such six-figure “birthday gifts” or other payments, which, Goda writes, were not only “secret and dependent on behavior”—that is, on not telling anyone about the payments and on submission to Hitler’s will—but also “simply too substantial to have been viewed seriously as legitimate.” All of these characteristics, as any federal prosecutor will tell you, are hallmarks of corruption.

Such corruption, of course, was not limited to the military: the Nazis were, according to historian Jonathan Petropoulos, “not only the most notorious murderers in history but also the greatest thieves.” Or as historian Richard J. Evans has noted, “Hitler’s rule [was] based not just on dictatorship, but also on plunder, theft and looting,” beginning with the “systematic confiscation of Jewish assets, beginning almost immediately on the Nazi seizure of power in 1933.” That looting expanded once the war began: at the end of September 1939, for instance, Evans reports, the German government “decreed a blanket confiscation of Polish property.” Dutch historian Gerard Aalders has estimated that the Nazis stole “the equivalent of 14 billion guilders in today’s money in Jewish-owned assets alone” from the Netherlands. In addition, Hitler and other Nazi leaders, like Hermann Göring, were known for stealing priceless artworks from conquered nations (the subject of the recent film The Monuments Men). In the context of thievery on such a grand scale, it hardly appears a stretch to think they might pay off the military men who made it all possible. After all, the Nazis had been doing the same for civilian leaders virtually since the moment they took over the state apparatus in 1933.

Yet there is one difference between the military leaders of the Third Reich and American leaders today—a difference perhaps revealed by their responses when confronted, after the war, with the evidence of their plunder. At the “High Command Trial” at Nuremberg in the winter of 1947-48, Walther von Brauchitsch and his colleague Franz Halder—who together had led the Heer into France in 1940—denied that they had ever taken payments, even when confronted with clear evidence that they had. Milch, for his part, claimed that his “birthday present” was compensation for the loss of his Lufthansa job. The other generals did the same: Goda notes that even Guderian, who was well known for his Polish estate, “changed the dates and circumstances of the transfer in order to pretend that the estate was a legitimate retirement gift.” In short, they all denied it—which is interesting in light of the fact that, during the first Nuremberg trial, a witness could casually admit, as one did on 3 January 1946, to the murder of 90,000 people.

To admit receiving payments, in other words, was worse—to the generals—than to admit having set Europe alight for essentially no reason. That this was so is revealed by the fact that their silence in court was matched by a similar silence in their postwar memoirs, none of whose authors (except Guderian, who as mentioned fudged some details) ever admitted taking money directly from the national till. That silence implies, in the first place, a conscious knowledge that these payments were simply too large to be legitimate. And that, in turn, implies a consciousness not merely of guilt, but also of shame—a concept that is simply incoherent without an understanding of what the act underlying the payments actually is. The silence of the generals, that is, implies that they had internalized a definition of corruption—unfortunately, however, a recent U.S. Supreme Court case, McDonnell v. United States, suggests that Americans (or at least the Supreme Court) have no such definition.

The facts of the case were that Robert McDonnell, then governor of Virginia, received $175,000 in benefits from the chief executive of a company called Star Scientific, presumably because Star Scientific not only wanted Virginia’s public universities to conduct research on its product, a “nutritional supplement” based on tobacco, but also felt McDonnell could conjure up those studies. The prosecution’s burden—according to Chief Justice John Roberts’ unanimous opinion—was to show “that Governor McDonnell committed (or agreed to commit) an ‘official act’ in exchange for the loans and gifts.” At that point, then, the case turned on the definition of “official act.”

According to the federal bribery statute, an “official act” is

any decision or action on any question, matter, cause, suit, proceeding or controversy, which may at any time be pending, or which may by law be brought before any public official, in such official’s official capacity, or in such official’s place of trust or profit.

McDonnell, of course, held that the actions he admitted taking on Star Scientific’s behalf—including setting up meetings with other state officials, making phone calls, and hosting events—did not constitute an “official act” under the law. The federal prosecutors, to the contrary, held that they did.

To McDonnell, counting the acts he took on behalf of Star Scientific as “official acts” required far too broad a definition of the term: to him (or rather to his attorneys), the government’s definition made “virtually all of a public servant’s activities ‘official,’ no matter how minor or innocuous.” The prosecutors argued that a broad definition of crooked acts is necessary to combat corruption; McDonnell argued that such a broad definition threatens the ability of public officials to act at all. Ultimately, his attorneys said, the broad sweep of the anti-corruption statute threatens constitutional government itself.

In the end the Court accepted that argument. In John Roberts’ words, the acts McDonnell committed concerned nothing “more specific and focused than a broad policy objective.” In other words, sure, McDonnell got a bunch of stuff from a constituent, and then he did a bunch of things for that constituent, but the things he did amounted to nothing more than simply doing his job—a familiar defense, to be sure, at Nuremberg.

The effective upshot of McDonnell, then, appears to be that the U.S. Supreme Court, at least, no longer has an adequate definition of corruption—which might appear a grandiose conclusion to hang on one court case, of course. But consider the response of Preet Bharara, former United States Attorney for the Southern District of New York, when The New Yorker asked him why his office did not prosecute anyone—anyone—in response to the financial meltdown of 2007-08. Sometimes, Bharara said in response, when “you see a building go up in flames, you have to wonder if there’s arson.” Sometimes, he continued, “it’s not arson, it’s an accident”—but sometimes “it is arson, and you can’t prove it.” Bharara’s comments suggested that the problem was an investigatory one: his investigators could not gather the right evidence. But McDonnell suggests the problem may have been something else: a legal one, lying not with the evidence but with the conceptual category required to use that evidence to prosecute a crime.

That something is going on is revealed by a report from Syracuse University’s Transactional Records Access Clearinghouse, or TRAC, which reported in 2011 that Department of Justice prosecutions for financial crimes had been falling since the early 1990s—despite the fact that the economic crisis of 2007-08 was driven by extremely questionable financial transactions. Other studies observe that Ronald Reagan, generally not thought of as a crusader type, prosecuted more financial crimes than Barack Obama did: in 2010, the Obama administration deported 393,000 immigrants—and prosecuted zero bankers.

The question, of course, is why that is so—to which any number of answers have been proposed. One, however, is especially resisted by those at the upper reaches of academia who are in a position to educate future federal prosecutors: people who, like Mouffe, think that

Democratic action … does not require a theory of truth and notions like unconditionality and universal validity but rather a variety of practices and pragmatic moves aimed at persuading people to broaden their commitments to others, to build a more inclusive community.

“Liberal democratic principles,” Mouffe goes on to claim, “can only be defended in a contextualist manner, as being constitutive of our form of life, and we should not try to ground our commitment to them on something supposedly safer”—that “something safer” being, I suppose, anything like the account ledgers of the German treasury from 1933 to 1945, which revealed the extent of Nazi corruption after the war.

To suggest, however, that there is a connection between the linguistic practices of professors and the failures of prosecutors is, of course, to engage in just the same style of argumentation as those who insist, with Mouffe, that it is “the mobilization of passions and sentiments, the multiplication of practices, institutions and language games that provide the conditions of possibility for democratic subjects and democratic forms of willing” that will lead to “the creation of a democratic ethos.” Among them is, for example, the literary scholar Jane Tompkins, who once made a similar point by recommending not “specific alterations in the current political and economic arrangements” but instead “a change of heart.” But perhaps the rise of such a species of supposed “leftism” ought to be expected in an age characterized by vast economic inequality, which, according to Nobel Prize-winning economist Joseph Stiglitz (a proud son of Gary, Indiana), “is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever.” The only question left, one supposes, is what else has been bought; the state of academia these days, it appears, suggests that academics can’t even see the Rhine, much less point the way to a bridge across.


The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of a sport at all, it is either something extremely obscure and obscenely technical that does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet while each of us has just such a picture of such a person in mind—probably you know at least a few, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both of those men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents) the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as an amateur endeavor—Bill James spent decades putting out his baseball work for little remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of a player or a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, whose Principles of Geology largely founded that science. In the Principles, Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely to be like those already at work around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball, scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which supposedly signified a player’s confidence in his own ability.) Sabermetricians disregard such “methods” of analysis in favor of examining the acts a player actually performs, as recorded in the statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or, for that matter, why would it lead in a political direction at all? The answer to the latter question seems to me nearly inevitable: sabermetrics, after all, is a discipline well suited to discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable, then, that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations toward some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led toward positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective, certain key features of the American system nearly instantly jump out.

The first of those features is that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing the winning percentage predicted from a team’s runs scored and runs allowed with the team’s actual winning percentage, James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then either that team is tremendously unlucky or some hidden factor is preventing its success. Maybe, for instance, the team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters it has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
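In code, that check is almost trivial. What follows is a minimal sketch in Python (mine, not James’s or Hofacker’s), using James’s original exponent of 2; later refinements prefer values closer to 1.83, and the team totals in the example are invented purely for illustration.

def pythagorean_expectation(runs_scored, runs_allowed, exponent=2.0):
    """Expected winning percentage from runs scored and runs allowed."""
    scored = runs_scored ** exponent
    allowed = runs_allowed ** exponent
    return scored / (scored + allowed)

def luck_gap(wins, losses, runs_scored, runs_allowed):
    """Actual winning percentage minus the Pythagorean expectation.
    A large gap, positive or negative, is the cue to go looking for
    luck, park effects, or some other hidden factor."""
    actual = wins / (wins + losses)
    return actual - pythagorean_expectation(runs_scored, runs_allowed)

# Hypothetical team: it outscores its opponents 750 to 650 over a
# 162-game season, which predicts a winning percentage of about .571,
# or roughly 93 wins ...
expected = pythagorean_expectation(750, 650)
# ... yet it finishes only 85-77 (.525). The gap of about -.046, some
# seven or eight "missing" wins, is the disparity that compels investigation.
gap = luck_gap(85, 77, 750, 650)
print(f"expected {expected:.3f}, actual {85 / 162:.3f}, gap {gap:+.3f}")

Run on real season totals instead of these made-up numbers, the same comparison is what flags the fluky teams James had in mind.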

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, such a gap would set off alarms for a sabermetrician, no such alarms go off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss to ignore a difference between its ratio of runs scored and allowed and its record of games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or, to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.

Some will defend that disconnect, as Justice Harlan did over fifty years ago, on grounds familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as The Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican Party to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a snap back is usually rather more costly than, say, the price of a new pitcher—which is to say that if you see an American revolutionary around you right now, he or she is likely wearing not a poncho or a black turtleneck, but an Oakland A’s hat.