Caterpillars

All scholars, lawyers, courtiers, gentlemen,
They call false caterpillars and intend their death.
2 Henry VI 

 

When Company A, 27th Armored Infantry Battalion, U.S. 9th Armored Division, reached the forested hills overlooking the Rhine in the early afternoon of 7 March 1945, and found the Ludendorff Bridge still, improbably, standing, its men may have been surprised to discover that they had not only found the last passage beyond Hitler’s Westwall into the heart of Germany—but also stumbled into a controversy that is still, seventy years on, continuing. That controversy could be represented by an essay written some years ago by the Belgian political theorist Chantal Mouffe on the American philosopher Richard Rorty: the problem with Rorty’s work, Mouffe claimed, was that he believed that the “enemies of human happiness are greed, sloth, and hypocrisy, and no deep analysis is required to understand how they could be eliminated.” Such beliefs are capital charges in intellectual-land, where the stock-in-trade is precisely the kind of “deep analysis” that Rorty thought (at least according to Mouffe) unnecessary, so it’s little wonder that, for the most part, it’s Mouffe who’s had the better of this argument—especially considering Rorty has been dead since 2007. Yet as the men of Company A might have told Mouffe—whose work is known, according to her Wikipedia article, for her “use of the work of Carl Schmitt” (a legal philosopher who joined the Nazi Party on 1 May 1933)—it’s actually Rorty’s work that explains just why they came to the German frontier; an account whose only significance lies in the fact that it may be the ascendance of Mouffe’s view over Rorty’s that explains such things as, for instance, why no one was arrested after the financial crisis of 2007-08.

That may, of course, sound like something of a stretch: what could the squalid affairs that nearly led to the crash of the world financial system have in common with such recondite matters as the dark duels conducted at academic conferences—or a lucky accident in the fog of war? But the link is in fact precisely the Ludendorff, sometimes called “the Bridge at Remagen”—a bridge that might not have been standing for Company A to find had the Nazi state really been the complicated ideological product described by people like Mouffe, instead of the product of “ruthless gangsters, distinguishable only by their facial hair” (as Rorty, following Vladimir Nabokov, once described Lenin, Trotsky, and Stalin). That’s because, according to (relatively) recent historical work that unfortunately has not yet deeply penetrated the English-speaking world, in March 1945 the German generals who had led the Blitzkrieg in 1940 and ’41—and then headed the defense of Hitler’s criminal empire—were far more concerned with the routing numbers of their bank accounts than the routes into Germany.

As “the ring closed around Germany in February, March, and April 1945”—wrote Ohio University historian Norman Goda in 2003—“and as thousands of troops were being shot for desertion,” certain high-ranking officers who, in some cases, had been receiving extra “monthly payments” directly from the German treasury on the orders of Hitler himself, and whose money had been “deposited into banks that were located in the immediate path of the enemy[,] quickly arranged to have their deposits shifted to accounts in what they hoped would be safer locales.” In other words, in the face of the Allied advance, Hitler’s generals—men like Heinz Guderian, who in 1943 was awarded “Deipenhof, an estate of 937 hectares (2,313 acres) worth RM [Reichsmark] 1.24 million” deep inside occupied Poland—were preoccupied with defending their money, not Germany.

Guderian—who led the tanks that broke the French lines at Sedan, the direct cause of the Fall of France in May 1940—was only one of many top-level military leaders who received secret pay-offs even before the beginning of World War II: Walther von Brauchitsch, who was Guderian’s superior, had for example been getting—tax-free—double his salary since 1938, while Field Marshal Erhard Milch, who quit his prewar job running Lufthansa to join the Luftwaffe, received a birthday “gift” from Hitler each year worth more than $100,000 U.S. They were just two of the many high military officers to receive such six-figure “birthday gifts,” or other payments, which Goda writes were not only “secret and dependent on behavior”—that is, on not telling anyone about the payments and on submission to Hitler’s will—but also “simply too substantial to have been viewed seriously as legitimate.” All of these characteristics, as any federal prosecutor will tell you, are hallmarks of corruption.

Such corruption, of course, was not limited to the military: the Nazis were, according to historian Jonathan Petropoulos, “not only the most notorious murderers in history but also the greatest thieves.” Or as historian Richard J. Evans has noted, “Hitler’s rule [was] based not just on dictatorship, but also on plunder, theft and looting,” beginning with the “systematic confiscation of Jewish assets, beginning almost immediately on the Nazi seizure of power in 1933.” That looting expanded once the war began: at the end of September 1939, for instance, Evans reports, the German government “decreed a blanket confiscation of Polish property.” Dutch historian Gerard Aalders has estimated that Nazi rule stole “the equivalent of 14 billion guilders in today’s money in Jewish-owned assets alone” from the Netherlands. In addition, Hitler and other Nazi leaders, like Hermann Göring, were known for stealing priceless artworks from conquered nations (the subject of the recent film The Monuments Men). In the context of thievery on such a grand scale, it hardly appears a stretch to think the regime might pay off the military men who made it all possible. After all, the Nazis had been doing the same for civilian leaders virtually since the moment they took over the state apparatus in 1933.

Yet there is one difference between the military leaders of the Third Reich and American leaders today—a difference perhaps revealed by their response when confronted after the war with the evidence of their plunder. At the “High Command Trial” at Nuremberg in the winter of 1947-48, Walther von Brauchitsch and his colleague Franz Halder—who together led the Heer into France in 1940—denied that they had ever taken payments, even when confronted with clear evidence that they had. Milch, similarly, claimed that his “birthday present” was compensation for the loss of his Lufthansa job. All the other generals did the same: Goda notes that even Guderian, who was well-known for his Polish estate, “changed the dates and circumstances of the transfer in order to pretend that the estate was a legitimate retirement gift.” In short, they all denied it—which is interesting in light of the fact that, during the first Nuremberg trial, on 3 January 1946, a witness could casually admit to the murder of 90,000 people.

To admit receiving payments, in other words, was worse—to the generals—than admitting to setting Europe alight for essentially no reason. That it was so is revealed by the fact that the silence in the courtroom was matched by a similar silence in postwar memoirs, none of whose authors (except Guderian, who as mentioned fudged some details) ever admitted taking money directly from the national till. That silence implies, in the first place, a conscious knowledge that these payments were simply too large to be legitimate. And that, in turn, implies a consciousness not merely of guilt, but also of shame—a concept that is simply incoherent without an understanding of what the act underlying the payments actually is. The silence of the generals, that is, implies that they had internalized a definition of corruption—unfortunately, however, a recent U.S. Supreme Court case, McDonnell v. United States, suggests that Americans (or at least the Supreme Court) have no such definition.

The facts of the case were that Robert McDonnell, then governor of Virginia, received $175,000 in benefits from the chief executive of a company called Star Scientific, presumably because Star Scientific not only wanted Virginia’s public universities to conduct research on its product, a “nutritional supplement” based on tobacco, but also felt McDonnell could conjure up the studies. The prosecution’s burden—according to Chief Justice John Roberts’ unanimous opinion—was then to show “that Governor McDonnell committed (or agreed to commit) an ‘official act’ in exchange for the loans and gifts.” The case, at that point, turned on the definition of “official act.”

According to the federal bribery statute, an “official act” is

any decision or action on any question, matter, cause, suit, proceeding or controversy, which may at any time be pending, or which may by law be brought before any public official, in such official’s official capacity, or in such official’s place of trust or profit.

McDonnell, of course, held that the actions he admitted taking on Star Scientific’s behalf—including setting up meetings with other state officials, making phone calls, and hosting events—did not constitute an “official act” under the law. The federal prosecutors, naturally, held to the contrary that they did.

To McDonnell, counting the acts he took on behalf of Star Scientific as “official acts” required too broad a definition of the term: to him (or rather to his attorneys), the government’s definition made “virtually all of a public servant’s activities ‘official,’ no matter how minor or innocuous.” The prosecutors argued that a broad definition of crooked acts is necessary to combat corruption; McDonnell argued that such a broad definition threatens the ability of public officials to act at all. Ultimately, his attorneys said, the broad sweep of the anti-corruption statute threatens constitutional government itself.

In the end the Court accepted that argument. In John Roberts’ words, the acts McDonnell committed could not be defined as anything “more specific and focused than a broad policy objective.” In other words, sure, McDonnell got a bunch of stuff from a constituent, and then he did a bunch of things for that constituent, but the things he did amounted to nothing more than simply doing his job—a familiar defense, to be sure, at Nuremberg.

The effective upshot of McDonnell, then, appears to be that the U.S. Supreme Court, at least, no longer has an adequate definition of corruption—which might seem a grandiose conclusion to hang on one court case, of course. But consider the response of Preet Bharara, former United States Attorney for the Southern District of New York, when he was asked by The New Yorker just why it was that his office did not prosecute anyone—anyone—in response to the financial meltdown of 2007-08. Sometimes, Bharara said in response, when “you see a building go up in flames, you have to wonder if there’s arson.” Sometimes, he continued, “it’s not arson, it’s an accident”—but sometimes “it is arson, and you can’t prove it.” Bharara’s comments suggested that the problem was an investigatory one: his investigators could not gather the right evidence. But McDonnell suggests that the problem may have been something else: a legal one, in which the difficulty lies not with the evidence but with the conceptual category required to use that evidence to prosecute a crime.

That something is going on is revealed by a 2011 report from Syracuse University’s Transactional Records Access Clearinghouse, or TRAC, which found that Department of Justice prosecutions for financial crimes had been falling since the early 1990s—despite the fact that the economic crisis of 2007 and 2008 was driven by extremely questionable financial transactions. Other studies observe that Ronald Reagan, generally not thought of as a crusader type, prosecuted more financial crimes than Barack Obama did: in 2010, the Obama administration deported 393,000 immigrants—and prosecuted zero bankers.

The question, of course, is why that is so—to which any number of answers have been proposed. One, however, is especially resisted by those at the upper reaches of academia who are in the position of educating future federal prosecutors: people who, like Mouffe, think that

Democratic action … does not require a theory of truth and notions like unconditionality and universal validity but rather a variety of practices and pragmatic moves aimed at persuading people to broaden their commitments to others, to build a more inclusive community.

“Liberal democratic principles,” Mouffe goes on to claim, “can only be defended in a contextualist manner, as being constitutive of our form of life, and we should not try to ground our commitment to them on something supposedly safer”—that “something safer” being, I suppose, anything like the account ledgers of the German treasury from 1933 to 1945, which revealed the extent of Nazi corruption after the war.

To suggest, however, that there is a connection between the linguistic practices of professors and the failures of prosecutors is, of course, to engage in just the same style of argumentation as those who insist, with Mouffe, that it is “the mobilization of passions and sentiments, the multiplication of practices, institutions and language games that provide the conditions of possibility for democratic subjects and democratic forms of willing” that will lead to “the creation of a democratic ethos.” Among these are, for example, literary scholar Jane Tompkins, who once made a similar point by recommending, not “specific alterations in the current political and economic arrangements,” but instead “a change of heart.” But perhaps the rise of such a species of supposed “leftism” ought to be expected in an age characterized by vast economic inequality, which according to Nobel Prize-winning economist Joseph Stiglitz (a proud son of Gary, Indiana), “is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever.” The only question left, one supposes, is what else has been bought; the state of academia these days, it appears, suggests that academics can’t even see the Rhine, much less point the way to a bridge across.

The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e. what didn’t happen. But even in 1998, there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty accused then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prohibit what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that grew until, beginning with the 106th Congress (1999-2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total number of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making those surrounding districts virtually impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by margins of more than 50 percent, while Mitt Romney won only eight districts by so large a margin. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively losing votes in others.

That’s why—despite the fact that he lost the popular vote—in 2012 Romney carried 226 of 435 Congressional districts, while Obama carried 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the House seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive that these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.
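To see how packing a party’s (or a minority’s) voters into a few lopsided districts converts a popular-vote edge into a seat deficit, consider a minimal sketch in Python: five hypothetical districts of 100,000 voters each, with numbers invented purely for illustration rather than drawn from any actual returns.

```python
# A toy illustration of "packing"; the vote counts below are hypothetical,
# chosen for illustration only, not actual election returns.
districts = {
    "District 1": {"D": 90_000, "R": 10_000},  # D voters "packed" into one district
    "District 2": {"D": 45_000, "R": 55_000},
    "District 3": {"D": 45_000, "R": 55_000},
    "District 4": {"D": 45_000, "R": 55_000},
    "District 5": {"D": 45_000, "R": 55_000},
}

total_d = sum(v["D"] for v in districts.values())
total_r = sum(v["R"] for v in districts.values())
seats_d = sum(1 for v in districts.values() if v["D"] > v["R"])
seats_r = len(districts) - seats_d

print(f"D: {total_d / (total_d + total_r):.1%} of votes, {seats_d}/{len(districts)} seats")
print(f"R: {total_r / (total_d + total_r):.1%} of votes, {seats_r}/{len(districts)} seats")

# "Wasted" votes: every vote for a losing candidate, plus every vote for a
# winning candidate beyond the bare majority needed to carry the district.
for name, votes in districts.items():
    winner = max(votes, key=votes.get)
    needed = sum(votes.values()) // 2 + 1
    wasted = {p: (n - needed if p == winner else n) for p, n in votes.items()}
    print(f"{name}: wasted votes {wasted}")
```

Party D wins a clear majority of the votes in this toy map yet takes only one of five seats; every vote above the bare majority needed in the packed district, like every vote cast on the losing side elsewhere, is “wasted” in the sense the redistricting literature uses the term. The real percentages quoted above are far less extreme, but they point the same way.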

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished, such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his point merely raises the question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority-minority” districts imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position, in other words, implies that minority votes will remain less valuable than other votes, because a vote in a “majority-minority” district has less chance of electing a congressperson who belongs to the majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because they reflect the underlying proportions of the population—but what matters is whether whoever is elected can deliver policies that benefit their constituents.

Right here, in other words, we get to the heart of the dispute between the late Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the number of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts, after all, are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats become, the less capable their party is of accomplishing its goals. Winning individual races by huge margins might be satisfying to some, but no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures, vote,” Justice William O. Douglas wrote in 1963; a year later, in Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. Their opponents ought to reply that they’re interested in what people could be—and in building a real road to get there.

All Even

George, I am an old man, and most people hate me.
But I don’t like them either so that makes it all even.

—Mr. Potter. It’s A Wonderful Life (1946).

 


Because someone I love had never seen it, I rewatched Frank Capra’s 1946 It’s A Wonderful Life the other night. To most people, the film is the story of how one George Bailey comes to perceive the value of helping “a few people get outta [the] slums” owned by the film’s “scurvy little spider,” the wealthy banker Mr. Potter—but to some viewers, what’s important about the inhabitants of Bedford Falls isn’t that they are poor by comparison to Potter, but instead that some of them are black: the man who plays the piano in the background of one scene, for instance, or Annie, the Bailey family’s maid. To Vincent Nobile, a professor of history at Rancho Cucamonga’s Chaffey College, the casting of these supporting roles not only demonstrates that “Capra showed no indication he could perceive blacks in roles outside the servant class,” but also that Potter is the story’s villain not because he is a slumlord, but because he calls the people Bailey helps “garlic eaters” (http://historynewsnetwork.org/article/1846). What makes Potter evil, in other words, isn’t his “cold monetary self-interest” but the fact that he’s “bigoted”: to this historian, Capra’s film isn’t the heartwarming story of how Americans banded together to stop a minority (rich people) from wrecking things, but instead the horrifying tragedy of how Americans banded together to stop a minority (black people) from wrecking things. Unfortunately, there are two problems with that view—problems that can be summarized by referring to the program for a football game that took place five years before the release of Capra’s classic: the Army-Navy game of 29 November, 1941.

Played at Philadelphia’s Municipal Stadium (later renamed John F. Kennedy Stadium), Navy won the contest 14-6; according to Vintage College Football Programs & Collectibles (collectable.wordpress.com [sic]), the program for that game contains 212 pages. On page 180 of that program there is a remarkable photograph. It is of the USS Arizona, the second and last of the American “Pennsylvania” class of super-dreadnought battleships—a ship meant to be, according to the New York Times of 13 July 1913, “the world’s biggest and most powerful, both offensively and defensively, superdreadnought ever constructed.” The last line of the photograph’s caption reads:

It is significant that despite the claims of air enthusiasts, no battleship has yet been sunk by bombs.

Slightly more than a week later, of course, on a clear bright Sunday morning just after 8:06 Hawaiian time, the hull of the great ship would come to rest on the bottom of Pearl Harbor, along with the bodies of nearly 1200 of her crew—struck down by the “air enthusiasts” of the Empire of the Sun. The lesson taught that morning, by aircraft directed by former Harvard student Isoroku Yamamoto, was a simple one: that “a saturation attack by huge numbers of low-value attackers”—as Pando Daily’s “War Nerd” columnist, Gary Brecher, has referred to this type of attack—can bring down nearly any target, no matter how powerful (http://exiledonline.com/the-war-nerd-this-is-how-the-carriers-will-die/all/1/). (It is a lesson the U.S. Navy has received more than once: in 2002, for instance, during the wargame “Millennium Challenge 2002,” Marine Corps Lieutenant General Paul K. Van Riper (fictionally) sent 16 ships to the bottom of the Persian Gulf with the creative use of, essentially, a bunch of cruise missiles and several dozen speedboats loaded with cans of gasoline and driven by gentlemen with, shall we say, a cavalier approach to mortality.) It’s the lesson that the cheap and shoddy can overcome quality—or in other words that, as the song says, the bigger they come, the harder they fall.

It’s a lesson that applies to more than merely the physical plane, as the Irish satirist Jonathan Swift knew: “Falsehood flies, and the Truth comes limping after,” the author of Gulliver’s Travels wrote in 1710. What Swift refers to is how saturation attacks can work on the intellectual as well as the physical plane—and as Emory University’s Mark Bauerlein (who, unfortunately for the warmth of my argument’s reception, endorsed Donald Trump in this past election) argued in Partisan Review in 2001, American academia has over the past several generations essentially become flooded with the mental equivalents of Al Qaeda speedboats. “Clear-sighted professors,” Bauerlein wrote then, understanding the conditions of academic research, “avoid empirical methods, aware that it takes too much time to verify propositions about culture, to corroborate facts with multiple sources, to consult primary documents, and to compile evidence adequate to inductive conclusions” (http://www.bu.edu/partisanreview/books/PR2001V68N2/HTML/files/assets/basic-html/index.html#226). Discussing It’s A Wonderful Life in terms of, say, the economic differences between banks like the one owned by Potter and the savings-and-loan run by George Bailey—and the political consequences therein—is, in other words, hugely expensive in terms of time and effort invested: it’s much more profitable to discuss the film in terms of its hidden racism. By “profitable,” I mean not merely that such a reading is intrinsically easier, but also that it is much more likely to upset people, and thus to attract attention to its author: the crass stunt once called épater le bourgeois.

The current reward system of the humanities, in other words, favors those the philosopher Isaiah Berlin called “foxes” (who know a great many things) rather than “hedgehogs” (who know one important thing). To the present defenders of the humanities, of course, that is the point: it’s the pro-speedboat argument noted feminist literary scholar Jane Tompkins made as long ago as 1981, in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of American Literary History.” There, Tompkins suggested that the “political and economic measures”—i.e., the battleships of American political discourse—“that constitute effective action for us” are, in reality, merely “superficial”: instead, what’s necessary are “not specific alterations in the current political and economic arrangements, but rather a change of heart” (http://engl651-jackson.wikispaces.umb.edu/file/view/Sentimental+Power.pdf). To those who think like Tompkins—or, apparently, Nobile—discussing It’s A Wonderful Life in terms of economics is to have missed the point entirely: what matters, according to them, isn’t the dreadnought clash of, for example, the unit banking system of the antebellum North (speedboats) versus the branch banking system of the antebellum South (battleships) within the sea of the American economy. (A contest that, incidentally, branch banking largely won in 1994, during Bill Clinton’s administration—a victory that in turn, because it helped to create the enormous “too big to fail” interstate banks of today, arguably played no small role in the crash of 2008.) Instead, what’s important is the seemingly-minor attack of a community college teacher upon a Titanic of American culture. Or, to put the point in terms popularized by Silicon Valley: the sheer BS quality of Vincent Nobile’s argument about It’s A Wonderful Life isn’t a bug—it’s a feature.

There is, however, one problem with such tactics—the same problem described by Rear Admiral Chuichi (“King Kong”) Hara of the Imperial Japanese Navy after the Japanese surrender in September 1945: “We won a great tactical victory at Pearl Harbor—and thereby lost the war.” Although, as the late American philosopher Richard Rorty observed in Achieving Our Country: Leftist Thought in Twentieth-Century America, “[l]eftists in the academy” have, in collaboration with “the Right,” succeeded in “making cultural issues central to public debate,” that hasn’t necessarily resulted in a victory for leftists, or even liberals (https://www.amazon.com/Achieving-Our-Country-Leftist-Twentieth-Century/dp/0674003128). Indeed, there’s some reason to suppose that, by discouraging certain forms of thought within left-leaning circles, academic leftists in the humanities have obscured what Elizabeth Drew, in the New York Review of Books, has called “unglamorous structural questions” in a fashion detrimental not merely to minority communities, but ultimately to all Americans (http://www.nybooks.com/articles/2016/08/18/american-democracy-betrayed/).

What Drew was referring to this past August was such matters as how—in the wake of the 2010 Census and the redistricting it entailed in every state in the Union—the Democrats ended up, in the 2012 election cycle, winning the popular vote for Congress “by 1.2 per cent, but still remained in the minority, with two hundred and one seats to the G.O.P.’s two hundred and thirty-four.” In other words, Democratic candidates for the House of Representatives got, as Katie Sanders noted in Politifact in 2013, “50.59 percent of the two-party vote” that November, but “won just 46.21 percent of seats”: only “the second time in 70 years that a party won the majority of the vote but didn’t win a majority of the House seats” (http://www.politifact.com/truth-o-meter/statements/2013/feb/19/steny-hoyer/steny-hoyer-house-democrats-won-majority-2012-popu/). The Republican advantage didn’t end there: as Rob Richie reported for The Nation in 2014, in that year’s congressional races Republicans won “about 52 percent of votes”—but ended “up with 57 percent of seats” (https://www.thenation.com/article/republicans-only-got-52-percent-vote-house-races/). And this year, the numbers suggest, the Republicans received less than half the popular vote—but will end up with fifty-five percent (241) of the total seats (435). These losses, Drew suggests, are ultimately due to the fact that “the Democrats simply weren’t as interested in such dry and detailed stuff as state legislatures and redistricting”—or, to put it less delicately, because potentially Democratic schemers have been put to work constructing re-readings of old movies instead of building arguments that are actually politically useful.

To put this even less delicately, many people on the liberal or left-wing side of the political aisle have, for the past several generations, spent their college educations learning, as Mark Bauerlein wrote back in 2001, how to “scoff[…] at empirical notions, chastising them as ‘naïve positivism.’” At the same time, a tiny minority among them—those destined to “relax their scruples and select a critical practice that fosters their own professional survival”—have learned, and are learning, to swim the dark seas of academia, taught by their masters how to live by feeding upon the minds of essentially defenseless undergraduates. The lucky ones, like Vince Nobile, manage—by the right mix of bowing and scraping—to land some kind of job security at some far-flung outpost of academia’s empire, where they make a living entertaining the yokels; the less successful, of course, write deeply ironic blogs.

Be that as it may, while there isn’t necessarily a connection between the humanistic academy’s flight from what Bauerlein calls “the canons of logic” and the fact that it was so easy—as John Cassidy of The New Yorker observed after this past presidential election—for so many in the American media and elsewhere “to dismiss the other outcome [i.e., Trump’s victory] as a live possibility” before the election, Cassidy at least ascribed the ease with which so many predicted a Clinton victory to the fact that many “haven’t been schooled in how to think in probabilistic terms” (http://www.newyorker.com/news/john-cassidy/media-culpa-the-press-and-the-election-result). That lack of education, which ranges from the impact of mathematics upon elections to the philosophical basis for holding elections at all (a basis that extends far beyond the usual seventeenth-century suspects rounded up in even the most erudite of college classes to medieval thinkers like Nicholas of Cusa, who argued in 1434’s Catholic Concordance that the “greater the agreement, the more infallible the judgment”—or in other words that speedboats are more trustworthy than battleships), most assuredly has had political consequences (http://www.cambridge.org/us/academic/subjects/politics-international-relations/texts-political-thought/nicholas-cusa-catholic-concordance?format=PB&isbn=9780521567732). While the ever-more abstruse academic turf wars between the sciences and the humanities might be good for the ever-dwindling numbers of tenured college professors, in other words, they are arguably disastrous, not only for Democrats and the populations they serve, but for the country as a whole. Clarence, angel second class, tells George Bailey that “we don’t use money in Heaven”—suggesting the way in which American academics swear off knowledge of the sciences upon entering their secular priesthood—but George replies, “it comes in real handy down here, bub.” What It’s A Wonderful Life wants to tell us is that a nation whose leadership balances so precariously upon such a narrow educational foundation is, no matter what the program says, as vulnerable as a battleship on a bright Pacific morning.

Or a skyscraper, on a cloudless September one.

Eat The Elephant

Well, gentlemen. Let’s go home.
Sink the Bismarck! (1960).

Someday someone will die and the public will not understand why we were not more effective and throwing every resource we had at certain problems.
—FBI Field Office, New York City, to FBI Headquarters, Washington, D.C.
29 August, 2001.

 

Simon Pegg, co-writer of the latest entry in the Star Trek franchise, Star Trek Beyond, explained the new film’s title in an interview over a year ago: the studio in charge of the franchise, Pegg said, thought that Star Trek was getting “a little too Star Trek-y.” One scene in particular seems designed to illustrate graphically just how “beyond” Beyond is willing to go: early on, the fabled starship Enterprise is torn apart by (as Michael O’Sullivan of the Washington Post describes it) “a swarm of mini-kamikaze ships called ‘bees.’” The scene is a pretty obvious signal of the new film’s attitude toward the past—but while the destruction of the Enterprise very well might be read as a kind of meta-reference to the process of filmmaking (say, how movies, which are constructed by teams of people over years of work, can be torn apart by critics in a virtual instant), another way to view the end of the signature starship is in the light of how the franchise’s creator, Gene Roddenberry, originally pitched Star Trek: as “space-age Captain Horatio Hornblower.” The demise of the Enterprise is, in other words, a perfect illustration of a truth about navies these days: they are examples of the punchline of the old joke about how to eat an elephant. (“One bite at a time.”) The payoff for thinking about Beyond in this second way, I would argue, is that it leads to much clearer thinking about things other than stories about aliens, or even stories themselves—like, say, American politics, where the elephant theory has held sway for some time.

“Starfleet,” the fictional organization employing James T. Kirk, Spock, and company, has always been framed as a kind of space-going navy—and as Pando Daily’s “War Nerd,” Gary Brecher, pointed out as long ago as 2002, navies are anachronistic in reality. Professionals know, as Brecher wrote fourteen years ago, that “every one of those big fancy aircraft carriers we love”—massive ships much like the fictional Enterprise—“won’t last one single day in combat against a serious enemy.” The reason we know this is not merely because of the attack on the USS Cole in 2000, which showed how two Al Qaeda guys in a thousand-dollar speedboat could blow a 250-million-dollar hole in a 2-billion-dollar warship, but also because—as Brecher points out in his piece—of research conducted by the U.S. military itself: a war game entitled “Millennium Challenge 2002.”

“Millennium Challenge 2002,” which conveniently took place in 2002, pitted an American “Blue” side against a fictional “Red” force (believed to be a representation of Iran). The commander of “Red” was Marine Corps Lieutenant General Paul K. Van Riper, who was hired because, in the words of his superiors, he was a “devious sort of guy”—though in the event, he proved to live up to his billing a little too well for the Pentagon’s taste. Taking note of the tactics used against the Cole, Van Riper attacked Blue’s warships with cruise missiles and a few dozen speedboats loaded with enormous cans of gasoline and driven by gentlemen with an unreasonable belief in the afterlife—a (fictional) attack that sent 19 U.S. Navy vessels to the bottom in perhaps 10 minutes. In doing so, Van Riper effectively demonstrated the truth also illustrated by the end of the Enterprise in Beyond: that large naval vessels are obsolete.

Even warships like the U.S. Navy’s latest supercarrier, the Gerald R. Ford—a floating city capable of completely leveling other cities of the non-floating variety—are nevertheless, as Brecher writes elsewhere, “history’s most expensive floating targets.” That’s because they’re vulnerable to exactly the sort of assault that takes down Enterprise: “a saturation attack by huge numbers of low-value attackers, whether they’re Persians in Cessnas or mass-produced Chinese cruise missiles.” They’re as vulnerable, in other words, as elephants are according to the old joke. Yet, whereas that might be a revolutionary insight in the military, the notion that with enough mice, even an elephant falls is old hat within American political circles.

After all, American politics has, at least since the 1980s, proceeded only by way of “saturation attacks by huge numbers of low-value attackers.” That was the whole point of what are now sometimes called “the culture wars.” During the 1980s and 1990s, as the late American philosopher Richard Rorty put the point, liberals and conservatives conspired together to allow “cultural politics to supplant real politics,” and for “cultural issues” to become “central to public debate.” In those years, it was possible to gain a name for oneself within departments of the humanities by attacking the “intrinsic value” of literature (while ignoring the fact that those arguments were isomorphic with similar ideas being cooked up in economics departments), while conversely, many on the religious right did much the same by attacking (sometimes literally) abortion providers or the teaching of evolution in the schools. To use a phrase of the British literary critic Terry Eagleton, in those years “micropolitics seem[ed] the order of the day”—somewhere during that time politics “shift[ed] from the transformative to the subversive.” What allowed that shift to happen, I’d say, was the notion that by addressing seemingly minor-scale points instead of major-scale ones, each side might eventually achieve a major-scale victory—or to put it more succinctly, that by taking enough small bites they could eat the elephant.

Just as the Americans and the Soviets refused to send clouds of ICBMs at each other during the Cold War, and instead fought “proxy wars” from the jungles of Vietnam to the mountains of Afghanistan, during the 1980s and 1990s both American liberals and conservatives declined to put their chief warships to sea, and instead held them in port. But right at this point the two storylines—the story of the navy, the story of American politics—begin to converge. That’s because the story of why warships are obsolete is also the story of why the naval analogy has no application to politics whatever.

“What does that tell you,” Brecher rhetorically asks, “about the distinguished gentlemen with all the ribbons on their chests who’ve been standing up on … bridges looking like they know what they’re doing for the past 50 years?” Since all naval vessels are simply holes in the water once the shooting really starts, those gentlemen must be, he says, “either stupid or so sleazy they’re willing to make a career commanding ships they goddamn well know are floating coffins for thousands.” Similarly, what does that tell you about an American liberal left that supposedly stands up for the majority of Americans, yet has stood by while, for instance, wages have remained—as innumerable reports confirm—essentially the same for forty years? For while it is all well and good for conservatives to agree to keep their Bismarcks and Nimitzes in port, that sort of agreement does not have the same payout for those on the liberal left—as ought to be obvious to anyone with an ounce of sense.

To see why requires seeing what the two major vessels of American politics are. Named most succinctly by William Jennings Bryan at Chicago in 1896, they concern what Bryan said were the only “two ideas of government”: the first being the idea that, “if you just legislate to make the well-to-do prosperous, that their prosperity will leak through on those below,” and the “Democratic idea,” the idea “that if you legislate to make the masses prosperous their prosperity will find its way up and through every class that rests upon it.” These are the two arguments that are effectively akin to the Enterprise: arguments at the very largest of scales, capable of surviving voyages to strange new worlds—because they apply as well to the twenty-third century of the Federation as they did to Bryan’s nineteenth. But that’s also what makes them different from any real battleship: unlike the Enterprise, they can’t be taken down no matter how many attack them.

There is, however, another way in which ideas can resemble warships: both molder in port. That’s one reason why, to speak of naval battles, the French lost the Battle of Trafalgar in 1805: as Wikipedia reports, because the “main French ships-of-the-line had been kept in harbour for years by the British blockade,” the “French crews included few experienced sailors and, as most of the crew had to be taught the elements of seamanship on the few occasions when they got to sea, gunnery was neglected.” It’s perfectly all right to stay in port, in other words, if you are merely protecting the status quo—the virtue of wasting time on minor issues is clear enough if keeping things as they are is the goal. But that’s just the danger from the other point of view: the more time in port, the less able in battle—and certainly the history of the past several generations shows that supposedly liberal or left types have been increasingly unwilling to take what Bryan called the “Democratic idea” out for a spin.

Undoubtedly, in other words, American conservatives have relished observing left-wing graduate students in the humanities debate—to use some topics Eagleton suggests—“the insatiability of desire, the inescapability of the metaphysical … [and] the indeterminate effects of political action.” But what might actually effect political change in the United States, assuming anyone is still interested in the outcome and not what it means in terms of career, is a plain, easily readable description of how that might be accomplished. It’s too bad that the mandarin admirals in charge of liberal politics these days appear to think that such a notion is a place where no one has gone before.

Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
Traditional; rec. by Charles Davis Tilman, 1889
Lexington, South Carolina

… science is but one.
Lucius Annaeus Seneca.

New rule changes for golf usually come into effect on the first of the year; this year, the big news is the ban on “anchoring” the putter: the practice of holding one end of the club in place against the player’s body. Yet as has been the case for nearly two decades, the real news from the game’s rule-makers this January is about a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as, for instance, Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued then that something does need to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis has demonstrated since, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical since so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe a regulatory body could have any advantage over a “market-maker” like King. Further, after the end of religious training it’s unlikely that many remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments would begin with something important—like, say, a prohibition against murder?—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps the most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the basis that, as the title of his essay says, the two subjects possess “non-overlapping magisteria”: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say then that the First Commandment could be thought of as a rendering of a logical rule seen as if through a glass darkly would be impermissible according to the prohibition laid down by Gould (among others): the prohibition against importing science into religion or vice versa. And yet some argue that such a prohibition is nonsense: for instance Richard Dawkins, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, limiting itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or, more specifically, protects it.

Supposing Dawkins, and not Gould, to be correct then is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment could be thought of as a reflection of what’s known as the “law of non-contradiction,” usually called the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.
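Rendered in the notation of propositional logic (the formalization below is the standard textbook one, not anything peculiar to Aristotle’s own wording), the law says of any proposition P:

```latex
% Law of non-contradiction: a proposition and its negation
% cannot both be true at once.
\neg\,(P \land \neg P)
```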

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936 F. Scott Fitzgerald claimed, in Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric used by the First Commandment, or the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty, in other words, would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, the thought goes, it’s necessary to create more kinds of golf. But the work of Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of [a] product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, what happens when the supply of a commodity is limited and not (potentially) infinite.

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth, but even aside from the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of the classical idea of the “law of demand.” Under that scenario, as money—or any commodity one likes—becomes rarer, it drives people to obtain more of it. But, Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity will stop bidding the price up, and instead look for substitutes for that commodity. Thus, the overall “demand” must necessarily decline.
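The mechanism is, at bottom, arithmetic about spending rates. Here is a minimal sketch, using the $21.7 million figure quoted above but with propensities to spend that are assumptions chosen purely for illustration (Stiglitz supplies no such figures in the passage quoted):

```python
# Illustration of the aggregate-demand argument sketched above.
# The two spending rates are assumptions for illustration only.
INCOME = 21_700_000           # the 2010 income figure quoted from Stiglitz

RATE_IF_CONCENTRATED = 0.10   # assumed: one very rich household spends a small share
RATE_IF_DISPERSED = 0.95      # assumed: 500 ordinary households spend nearly all of it

spent_concentrated = INCOME * RATE_IF_CONCENTRATED
spent_dispersed = 500 * (INCOME / 500) * RATE_IF_DISPERSED

print(f"Spent when held by one household: ${spent_concentrated:,.0f}")
print(f"Spent when split among 500:       ${spent_dispersed:,.0f}")
print(f"Demand lost to concentration:     ${spent_dispersed - spent_concentrated:,.0f}")
```

Whatever rates one assumes, so long as the single rich household spends a smaller fraction of its income than the five hundred households would, concentrating the money shrinks total spending, which is all the claim about declining aggregate demand requires.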

That, for instance, is what happened to cotton after 1860. That year, cotton grown in the southern United States was America’s leading export, constituting (as Eugen R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain.” But as the war advanced—and the Northern blockade took effect—that share plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, the share supplied by other sources rose: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports amounted to merely $7 million before the bombardment of Fort Sumter in 1861—but by the end of the war in 1865, Egyptian profits were $77 million, as Europeans sought sources of supply other than the blockaded South. This, despite the fact that Egyptian cotton was widely acknowledged to be inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South thus failed to understand that, while it constituted the lion’s share of production prior to the war, it was not the only place cotton could be grown—other sources of production existed. In some cases, however—through natural or human-created means—an underlying commodity can face a bottleneck of some kind, creating a shortage. According to classical economic theory, in such a case demand for the commodity will grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether to an inferior substitute or, perhaps, to giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to have such a stranglehold on the market that it effectively kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation can have only one of two solutions: either the monopoly is broken, or people turn to a completely different substitute. To use an idiom from baseball, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s annual participation survey found 30 million players; in 2014, by contrast, the number was slightly fewer than 25 million, according to a Golf Digest story by Mike Stachura. Mark King’s plan to win those players back, as we’ve seen, is to invent a new set of rules—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you could take up another set.

Yet an examination of golf as it actually is, I’d say, would find that Rorty’s ideal is already, more or less, a description of the sport’s current model in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Although, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” private clubs constitute around 30 percent of all golf facilities, and, as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means to learn the game. To put it in Stiglitz’ terms, the PGA of America—and the private clubs that hire PGA professionals to staff their operations—essentially holds a monopoly on instruction: the basic education in how to accomplish the essential skill of the game, hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers take up the game to enjoy. Unfortunately, it’s one that most golfers never acquire: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to recapture the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that the reason golf’s participation rate has declined is not an unfair set of rules, but rather that golf’s model bears more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant holds so much power that it effectively destroys the entire market. In situations like that, Stiglitz (and many other economists) argue that regulatory intervention is necessary—a realization that the United States Golf Association may be arriving at as well, through its continuing refusal to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect, by continuing to allow some to monopolize access to the pleasure of the sport. By refusing to allow an “escape hatch” by which the older model could cling to life, the USGA is, consciously or not, speeding the day when golf will become “all one thing or all the other,” as someone once said on a vaguely similar occasion, invoking an idea akin to the First Commandment or the law of non-contradiction. What the USGA’s stand in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” just might be no different from, say, claiming your subprime mortgages are good, or that police figures accurately reflect crime. For the USGA then, if no one else, that old-time religion is good enough: despite being against anchoring, it seems the organization still believes in anchors.

Our Game

[Photo: pick-up truck with Confederate battle flag and bumper stickers.]

[Baseball] is our game: the American game … [it] belongs as much to our institutions, fits into them as significantly, as our constitutions, laws: is just as important in the sum total of our historic life.
—Walt Whitman. April, 1889.

The 2015 Chicago Cubs are now a memory, yet while they lived nearly all of Chicago was enthralled—not least because of the supposed prophecy of a movie starring a noted Canadian. For this White Sox fan, the enterprise reeked of the phony nostalgia that has enveloped baseball, the sort sportswriters like to invoke whenever they, for instance, quote Walt Whitman’s remark that baseball “is our game: the American game.” Yet even while, to their fans, this year’s Cubs were a time machine to what many envisioned as a simpler, and perhaps better, America—much as the truck pictured above may be a kind of DeLorean to its driver—in point of fact the team’s success was built upon precisely the kind of hatred of tradition that Whitman thought made baseball “America’s game”: baseball, Whitman said, had “the snap, go, fling of the American character.” It’s for that reason, perhaps, that the 2015 Chicago Cubs may yet prove a watershed edition of the Lovable Losers: they might mark not only the return of the Cubs to the elite of the National League, but also the resurgence of a type of thinking that was in the vanguard in Whitman’s time and—like World Series appearances for the North Siders—of rare vintage since. It’s a resurgence that may, in a year of Donald Trump, prove far more important than the victories of baseball teams, no matter how lovable.

That, to say the least, is an ambitious thesis: the rise of the Cubs signifies little more than that their new owners possess a lot of money, some might reply. But the Cubs’ return to importance was undoubtedly driven by the team’s adherence, led by former Boston general manager Theo Epstein, to the principles of what’s been called the “analytical revolution.” The distinction was made clear during the divisional series against the hated St. Louis Cardinals: whereas St. Louis manager Mike Matheny asserted, regarding how baseball managers ought to handle their pitching staff, that managers “first and foremost have to trust our gut,” the Cubs’ Joe Maddon (as I wrote about in a previous post) spent his entire season doing such things as batting his pitcher eighth, on the grounds that statistical analysis showed that by doing so his team gained a nearly infinitesimal edge. (Cf. “Why Joe Maddon bats the pitcher eighth,” ESPN.com.)

Since hiring Epstein, few franchises in baseball have been as devoted to what is known as the “sabermetric” approach. Epstein was already well known for “using statistical evidence”—as the New Yorker’s Ben McGrath put it a year before Epstein’s previous team, the Boston Red Sox, overcame their own near-century of futility in 2004—rather than relying upon what Epstein’s hero, the storied Bill James, has called “baseball’s Kilimanjaro of repeated legend and legerdemain”—the sort embodied by Matheny’s apparent reliance on seat-of-the-pants judgment.

Yet, while Bill James’ sort of thinking may be astonishingly new to baseball’s old guard, it would have been old hat to Whitman, who had the example of another Bill James directly in front of him. To follow the sabermetric approach, after all, requires believing (as the American philosopher William James did, according to the Internet Encyclopedia of Philosophy) that “every event is caused and that the world as a whole is rationally intelligible”—an approach that Whitman would not only have understood but applauded.

Such at least was the argument of the late American philosopher Richard Rorty, whose lifework was devoted to preserving the legacy of late nineteenth- and early twentieth-century writers like Whitman and James. To Rorty, both of those earlier men subscribed to a kind of belief in America rarely seen today: both implicitly believed in what James’ follower John Dewey would call “the philosophy of democracy,” in which “both pragmatism and America are expressions of a hopeful, melioristic, experimental frame of mind.” It’s in that sense, Rorty argued, that William James’ famous assertion that “the true is only the expedient in our way of thinking” ought to be understood: what James meant by lines like this was that what we call “truth” ought to be tested against reality in the same way that scientists test their ideas about the world via experiments, instead of relying upon “guts.”

Such a frame of mind, however, has been out of fashion in academia since at least the 1940s, Rorty often noted, when Robert Hutchins and Mortimer Adler of the University of Chicago were reviling the philosophy of Dewey and James as “vulgar, ‘relativistic,’ and self-refuting.” To say, as James did say, “that truth is what works” was—according to thinkers like Hutchins and Adler—“to reduce the quest for truth to the quest for power.” To put it another way, Hutchins and Adler provided the ur-example of what’s become known as Godwin’s Law: the idea that, sooner or later, every debater will claim that the opponent’s position logically ends in Nazism.

Such thinking is by no means extinct in academia: indeed, much of Rorty’s work at the end of his life was devoted to demonstrating how the sorts of arguments Hutchins and Adler enlisted for their conservative politics had become the very lifeblood of those supposedly opposed to the conservative position. That’s why, to those whom Rorty called the “Unpatriotic Academy”—those who “find pride in American citizenship impossible” and “associate American patriotism with an endorsement of atrocities”—the picture above, taken at a gas station just over the Ohio River in southern Indiana, will be confirmation of their view of the United States: to such people, America and science are more or less the same thing as the kind of nearly explicit racism displayed on the truck.

The problem with those sorts of arguments, Rorty wanted to claim in return, is that they are all too willing to take the views of some conservative Americans at face value: the view, for instance, that “America is a Christian country.” That sentence is remarkable precisely because it is not taken from the rantings of some Southern fundamentalist preacher or Republican candidate, but is rather the opening sentence of an article by the novelist and essayist Marilynne Robinson in, of all places, the New York Review of Books. That it could appear there, I think Rorty would have said, shows just how much today’s academia really shares the views of its supposed opponents.

Yet, as Rorty was always arguing, the ideas held by the pragmatists are not so easily reduced to mere American jingoism as the many critics of Dewey and James would have it—nor is “America” so easily conflated with simple racism. That is because the arguments of the American pragmatists were, arguably, simply a restatement of a set of ideas held by a man who lived long before North America was even added to the world’s maps: a man known to history as Ibn Khaldun, who was born in Tunis, on Africa’s Mediterranean coastline, in the year 1332 of the Western calendar.

Khaldun’s views of history, as set out in his book the Muqaddimah (“Introduction,” often known by its Greek title, Prolegomena), can be seen as forerunners of the ideas of John Dewey and William James, as well as of the ideas of Bill James and the front office of the Chicago Cubs. According to a short one-page biography of the Arab thinker by one “Dr. A. Zahoor,” for example, Khaldun believed that writing history required such things as “relating events to each other through cause and effect”—much as both men named William James believed that baseball events are not inexplicable. As Khaldun himself wrote:

The rule for distinguishing what is true from what is false in history is based on its possibility or impossibility: That is to say, we must examine human society and discriminate between the characteristics which are essential and inherent in its nature and those which are accidental and need not be taken into account, recognizing further those which cannot possibly belong to it. If we do this, we have a rule for separating historical truth from error by means of demonstrative methods that admits of no doubt.

This statement is, I think, hardly distinguishable from what the pragmatists or the sabermetricians are after: the discovery of what Khaldun calls “those phenomena [that] were not the outcome of chance, but were controlled by laws of their own.” In just the same way that Bill James and his followers wish to discover things like when, if ever, it is permissible or even advisable to attempt to steal a base, or lay down a bunt (both, he says, are more often inadvisable strategies, precisely on the grounds that employing them leaves too much to chance), Khaldun wishes to discover ways to identify ideal strategies in a wider realm.

If Dewey and James were right to claim that such ideas ought to be one and the same as the idea of “America,” then we could say that Ibn Khaldun, if not the first, was certainly one of the first Americans—that is, one of the first to believe in those ideas we would later come to call “America.” That Khaldun was entirely ignorant of such places as southern Indiana should, by these lights, no more count against his Americanness than Donald Trump’s ignorance of more than geography ought to count against his. Indeed, judged on this scale, it should be no contest as to which of the three—Donald Trump, Marilynne Robinson, or Ibn Khaldun—is the most likely to be a baseball fan. Nor, need it be added, which is the best American.

Talk That Talk

Talk that talk.
—John Lee Hooker, “Boom Boom,” 1961.

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, plenty of things exist that have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make sense? In other words, is it even possible to conjoin sympathy for, and solidarity with, society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand in hand with the powerful—as it must, given that it depends upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message: it has depended on patronage of one sort or another. Hence a potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: having money doesn’t make somebody wise, or even particularly intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least according to the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. For Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area he correctly characterizes as at least as unequal as—if not more unequal than—any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emphasis in original]; and also that “small size equals safe choices.” That is, the smaller the collections he studied were, the more homogeneous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, is just the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet whereas, according to Carnegie’s theory, this is all to the good—on the grounds that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever wins a given contest gets everything—and what that means, as Taleb says about the art world, is “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much—but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. Someone who possessed more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that is bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, it would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their positions by sheer happenstance rather than merit.
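
To make the dynamic concrete, here is a minimal sketch (my own illustration, not Taleb's) of a cumulative-advantage process: two artists of identical quality, where each new unit of attention goes to an artist with probability proportional to the attention that artist already has. The numbers are hypothetical; the point is only that most runs end lopsided, and which artist ends up on top is a matter of chance.

```python
# A hypothetical cumulative-advantage ("rich get richer") process: both
# artists have identical quality, yet attention compounds on itself.
import random

def final_share(rounds: int = 5_000) -> float:
    attention = [1, 1]  # identical starting points, identical "quality"
    for _ in range(rounds):
        p_first = attention[0] / sum(attention)
        winner = 0 if random.random() < p_first else 1
        attention[winner] += 1
    return attention[0] / sum(attention)

if __name__ == "__main__":
    random.seed(42)
    shares = [final_share() for _ in range(1_000)]
    lopsided = sum(1 for s in shares if s > 0.7 or s < 0.3)
    print(f"runs ending in at least a 70/30 split: {lopsided / len(shares):.0%}")
```

Run it and, under these assumptions, a majority of the trials end with at least a 70/30 split, even though the two competitors are identical by construction.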

A society attempting to maximize the talent available to it—and it is hard to see what else the goal should be—should not be burying potential talent but exposing as much of it as possible: getting it working, doing the most good. Yet whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years of training and experience, in “culture” such performers often either emerge in youth or not at all. Of all the parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, it’s hard to think of a worse model for a leftish kind of politics than culture—which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature proclaiming “power to the people,” the United States is as unequal a place today as it has been since the 1920s. For one thing, such a model stands in the way of critiques of American institutions built according to the opposite, “Carnegian,” theory—and many American institutions are built according to just such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome—which also means a potentially greater role for chance. It’s worse when one considers that the court is an institution that gains new members only irregularly: appointing a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
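
The statistical point can be put in a few lines of code. The sketch below is my own illustration, not anything from Purdy or Taleb: it treats each decision-maker's leaning as a random draw from the same population of views and compares how much the collective judgment of a nine-member panel swings from one composition to the next versus that of a much larger body. The distribution and the panel sizes are assumptions chosen only for illustration.

```python
# Hypothetical illustration: smaller panels produce more variable outcomes,
# i.e. composition (and therefore chance) matters more.
import random
import statistics

def panel_outcome(panel_size: int) -> float:
    # each member's leaning is drawn from the same population of views;
    # the panel's collective judgment is simply the average leaning
    return statistics.mean(random.gauss(0.0, 1.0) for _ in range(panel_size))

if __name__ == "__main__":
    random.seed(7)
    trials = 10_000
    for size in (9, 501):  # a nine-member court vs. a large deliberative body
        outcomes = [panel_outcome(size) for _ in range(trials)]
        spread = statistics.stdev(outcomes)
        print(f"panel of {size:>3}: run-to-run spread of judgments = {spread:.3f}")
```

Under these assumptions the spread for the nine-member panel comes out roughly seven to eight times larger than for the larger body, since it scales with one over the square root of the panel size; that is the sense in which a nine-person institution leaves more to chance.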

None of these things is, I think any reasonable person would say, a desirable outcome for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, requires a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.