Miracles Alone

They say miracles are past; and we have our
philosophical persons, to make modern and familiar, things supernatural and causeless.
All’s Well That Ends Well Act II, scene 3  

“If academic writing is to become expansive again,” wrote Joshua Rothman in The New Yorker a year ago, in one of the more Marxist sentences to appear in a mainstream publication lately, “academia will probably have to expand first.” What Rothman was referring to was the minor controversy set off by a piece by Nicholas Kristof in the New York Times entitled “Professors, We Need You!”—a rant attacking the “unintelligibility” of contemporary academic writing, and so forth. Rothman’s take on the business—as a former graduate student himself—is that the increasing obscurity of the superstructure of academic writing is the result of an ever-smaller base: “the audience for academic work has been shrinking,” he says, and so building “a successful academic career” requires “serially impress[ing] very small groups of people,” like journal editors and hiring committees. So, to Rothman, turning academic writing around would mean an expanding university system: that is, one in which it wasn’t terribly difficult to get a job. To put it another way: to make academics visible to the people, it would probably help to allow the people to become academics.

To very many current academics, however, that’s precisely what’s off the table, because their work involves questioning the assumption that powers Rothman’s whole proposal: that writing for large numbers of people requires writing that does not demand enormous amounts of training to be read. A lot of academics in today’s humanities departments would “historicize” that assumption by saying that it only came into being with the Protestant Reformation at the beginning of the modern era, which held that the Bible could be read, and understood, by anyone—not just by a carefully chosen set of acolytes capable of translating the holy mysteries to the laity, as in Roman Catholic practice. Academics of this sort might then make reference, as Benedict Anderson did in his Imagined Communities, to “print capitalism”—to the way the growth of newspapers and other printed materials demonstrated that writing untethered from a clerical caste could generate huge profits. And so on.

The defenses of obscure and difficult writing offered by such academics as Judith Butler, however, do not always take that turn: very often, difficult writing is defended on the grounds that such esoteric efforts “can help point the way to a more socially just world,” because “language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.” That, one supposes, might be true—and it’s certainly true that what’s known as the “cultural left” has, as the philosopher Richard Rorty once remarked, made all of us more sensitive to the peculiar ways in which language can shape how people perceive one another. But it’s also true that this kind of thinking fails to think through the full meaning of standing against intelligibility.

Most obviously, though this point is often obscured, it means standing against the doctrine known as “naturalism,” defined by the Stanford Encyclopedia of Philosophy as “asserting that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity.” At least since Mark Twain adapted naturalism to literature by insisting that “the personages of a tale shall confine themselves to possibilities and let miracles alone,” a baseline belief in naturalism has been what created the kind of widely literate public Kristof’s piece requires. Mysteries, that is, can only be understood by someone initiated into them: hence, to proceed without initiates requires outlawing mystery.

As should be obvious but apparently isn’t, it’s only absent a belief in mystery that anyone could, in Richard Rorty’s words, “think of American citizenship as an opportunity for action”—rather than being possessed, as Rorty laments so much of this so-called “cultural left” is, by the “spirit of detached spectatorship.” Difficult writing, in other words, might be able to do something for small groups, but it cannot, by definition, help larger ones—which is to say that it is probably no accident that Judith Butler should have left just what she meant by “socially just” undefined, because by the logic of her argument it almost certainly does not include the vast majority of America’s, or the world’s, people.

“In the early decades of” the twentieth century, Richard Rorty once wrote, “when an intellectual stepped back from his or her country’s history and looked at it through skeptical eyes, the chances were that he or she was about to propose a new political initiative.” That tradition is, it seems, nearly lost: today’s “academic Left,” Rorty wrote then, “has no projects to propose to America, no vision of a country to be achieved by building a consensus on the need for specific reforms.” For Rorty, however, that seems blamable on the intellectuals themselves—a kind of “blaming the victim,” or trahison des clercs, that is itself a betrayal of the insights of naturalism: by those lights, it’s no more possible that large numbers of smart people should have inexplicably given up on their political efforts completely than that a flaming shrubbery could talk.

It’s that possibility that the British literary critic Terry Eagleton appears to have considered when, in his The Illusions of Postmodernism, he suggests that the gesture of denying that “there is any significant distinction between discourse and reality”—a denial specifically aimed at naturalism’s attempt to rule out the mysterious—may owe more to “the deadlocked political situation of a highly specific corner of the globe” than it does to the failures of the intellectuals. What I presume Eagleton is talking about is what Eric Alterman, writing in The Atlantic, called “the conundrum of a system that, as currently constructed, gives the minority party no strategic stake in sensible governance.” Very many of the features of today’s American government, that is, are designed not to produce good government, but rather to enable a minority to obstruct the doings of the majority—the famous “checks and balances.”

While American civic discourse often celebrates those supposed features, as I’ve written before the work of historians like Manisha Sinha and Leonard Richards shows that in fact they are due not to the foresight of the Founding Fathers but to the need to protect the richest minority of the then-newborn republic: the slaveowners. It isn’t any accident that, as Alterman says, it “has become easier and easier for a determined minority to throw sand in the gears of the legislative process”: the very structure of the Senate, for example, allows “the forty Republican senators … [who] represent barely a third of the US population” to block any legislation, even excluding the more obscure senatorial tools, like the filibuster and the hold. These devices, as the work of historians shows, were originally developed in order to protect slavery; as Lawrence Goldstone put the point in the New Republic recently, during the Constitutional Convention of 1787 “slaveholders won a series of concessions,” among them “the makeup of the Senate” and the method of electing a president. These hangovers linger on, defending interests perhaps less obviously evil than the owners of slaves, but interests by and large not identical with those of the average citizen: today, those features are all check and no balance.

Such an explanation, I think, is more likely than Rorty’s stance of casting blame on people like Judith Butler, as odious as her beliefs really are. It might better explain how, for instance, as the writer Seymour Krim described in his essay “The American Novel Made Me,” intellectuals began “in the mid 50s [1950s] to regard the novel as a used-up medium,” so that the “same apocalyptic sense of possibility that we once felt in the U.S. novel now went into its examination”: what Krim calls “the game” of “literary criticism.” In that game, what matters isn’t the description of reality itself, but rather the methods of description by which “reality” is recorded: in line with Rorty’s idea of the intellectual turn against reality, not so much the photograph as the inner workings of the camera. Yet while that pursuit might appear to some ridiculous, even objectively harmful, blaming people—even smart people—for having become involved in such efforts because you have blocked their real path to advancement is like blaming butter for melting in the sun.

What all of this may show, in other words, is that for academic writing to become expansive again, as Joshua Rothman wishes, far more will be required than just the expansion of academia, though that is almost certainly part of it. What it will also require is a new band of writers and politicians, recommitted to the tenets of naturalism and determined, as Krim said about “the American realistic novel of the mid to late 1930s,” to be “‘truthful’ in recreating American life.” To Kristof or Rothman, that’s a task unlikely even to be undertaken in our lifetimes, much less accomplished. Yet it ought to be acknowledged that Kristof and Rothman’s own efforts imply that a hunger exists that may not know its name—that a wanderer is abroad, holding aloft a lantern flickering not because of a rising darkness, but because of an onrushing dawn.

 


The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of a sport at all, it is either one that is extremely obscure, obscenely technical, and does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has a picture of exactly such a person in mind—probably you know at least a few, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both of those men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents), the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as a purely amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line later adopted by Charles Lyell, the Englishman whose Principles of Geology largely founded that science. In it Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely to be those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball, scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify a player’s confidence in his own ability.) Sabermetricians disregard such “methods” of analysis in favor of examining the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or, for that matter, why would it lead in a political direction at all? The answer to the latter question, I suspect, is nearly inevitable: sabermetrics, after all, is a discipline well suited to discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable, then, that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations toward some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, beside the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led toward positions usually called “liberal” in today’s America because, from that sabermetrical, Strabonian perspective, certain key features of the American system nearly instantly jump out.

The first of those features will be that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—the ratio of a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then either that team is tremendously unlucky or some hidden factor is preventing its success. Maybe, for instance, that team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters the team has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
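As a minimal sketch of that comparison—the team names and season lines below are invented purely for illustration, and the exponent of two is the classic version of James’s formula—the check fits in a few lines of Python:

```python
# Pythagorean expectation: expected winning percentage ~ RS^2 / (RS^2 + RA^2)
def pythagorean_expectation(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Bill James's estimate of winning percentage from runs scored and runs allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Hypothetical season lines: (runs scored, runs allowed, actual wins, games played)
teams = {
    "Team A": (800, 650, 97, 162),   # wins roughly in line with its run totals
    "Team B": (780, 660, 76, 162),   # wins well below what its run totals imply
}

for name, (rs, ra, wins, games) in teams.items():
    expected = pythagorean_expectation(rs, ra)
    actual = wins / games
    gap = actual - expected
    flag = "  <- investigate: luck, or a hidden factor" if abs(gap) > 0.03 else ""
    print(f"{name}: expected {expected:.3f}, actual {actual:.3f}, gap {gap:+.3f}{flag}")
```

Run over a real season’s standings, a gap of a couple of games in either direction is unremarkable; a persistent, large gap is the sabermetrician’s cue to start asking questions.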

Weirdly, however, the American system regularly produces similar disparities—and yet, while in the case of a baseball team such a gap would set off alarms for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren called “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it were to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of that state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or, to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.
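The size of that disparity is easy to make concrete. The population figures below are rough, round approximations used only for illustration, not exact census counts:

```python
# Approximate state populations in millions (rough, illustrative figures only)
populations_millions = {
    "California": 39.0,
    "New York": 19.5,
    "Arkansas": 3.0,
    "North Dakota": 0.76,
}

SENATORS_PER_STATE = 2  # fixed by the Constitution, regardless of population

for state, pop in populations_millions.items():
    residents_per_senator = pop * 1_000_000 / SENATORS_PER_STATE
    print(f"{state}: one senator per {residents_per_senator:,.0f} residents")

# On these rough figures, a North Dakota resident enjoys roughly fifty times the
# per-capita Senate representation of a California resident.
```

Whatever the exact numbers in any given year, the order of magnitude of the gap is the point.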

Some will defend that disconnect, as Justice Harlan did over fifty years ago, on grounds familiar to the “cultural left”: those of “history” and “local practice” and so forth—in other words, that this is how the Constitution originally constructed the American state. Yet attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as The Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see an American revolutionary around you right now, he or she is likely wearing not a poncho or a black turtleneck, but an Oakland A’s hat.

For Miracles Are Ceased

Turn him to any cause of policy,
The Gordian knot of it he will unloose …
Henry V

 

For connoisseurs of Schadenfreude, one of the most entertaining diversions of the past half-century or so is the turf war fought out in the universities between the sciences and the humanities now that, as the novelist R. Scott Bakker has written, “at long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities.” A lot of what’s happened in the humanities since the 1960s—the “canon wars,” the popularization of Continental philosophy, the establishment of various sorts of “studies”—could be described as a disciplinary battle with the sciences, and not the “political” war it is often advertised as; under that description, the vaunted outreach of the humanities to previously underserved populations stops looking quite so noble and starts looking more like the efforts, a century ago, of robber-baron industrialists to employ minority scabs against striking workers. That comparison, in fact, is not meant flippantly: it suggests that the history of the academy since the 1960s looks less like the glorious march toward inclusion its proponents sometimes portray than like the opening moves of an ideological war designed to lay the foundation for the impoverishment of all America.

According to University of Illinois at Chicago professor of literature Walter Benn Michaels, after all, today’s humanistic academy has largely become the “human resources department of neoliberalism.” Michaels’ work suggests, in fact, that the “real” purpose of the professoriate’s promotion of the interests of women and minorities has not been the sheer justice of the cause, but rather the preservation of its own antiquated and possibly ridiculous methods of “scholarship.” That bargain, however—if there was one—may be said to have had unintended consequences: among them, the reality that some CEOs now enjoy pay thousands of times that of the average worker.

Correlation is not causation, of course, but it does seem inarguable that, as former Secretary of Labor Robert Reich wrote recently in Salon, Americans have forgotten the central historical lesson of the twentieth century: that a nation’s health (and not just its economic health) depends on consumer demand. As Reich wrote, contrary to those who argue in favor of some form of “trickle down” economics, “America’s real job creators are consumers, whose rising wages generate jobs and growth.” When workers get raises, they have “enough purchasing power to buy what expanding businesses [have] to offer.” In short (pardon, Secretary Reich), “broadly shared prosperity isn’t just compatible with a healthy economy that benefits everyone—it’s essential to it.” But Americans have, it seems, forgotten that lesson: as many, many observers have demonstrated, American wages have largely been stagnant since the early 1970s.

Still, that doesn’t mean the academy is entirely to blame: for the most part, it’s only because of the work of academics that the fact of falling wages is known with any certainty—though it’s also fair to say that the evidence can be gathered by a passing acquaintance with reality. Yet it’s also true that, as New York University professor of physics Alan Sokal averred some two decades ago, much of the work of the humanities since the 1960s has been devoted to undermining, in the name of one liberatory vision or another, the “stodgy” belief “that there exists an external world, [and] that there exist objective truths about it.” Such work has arguably had a version of the political effect often bombastically claimed for it—undoubtedly, there are many more people from previously unrepresented groups in positions of authority throughout American society today than there were before.

Yet, as the Marxist scholars often derided by their “postmodernist” successors knew—and as those successors appear to ignore—every advance has its cost, and interpreted dialectically the turn of the humanities away from scientific naturalism has two possible motives. The first, as mentioned, is the possibility that territory once the exclusive province of the humanities has been invaded by the sciences, and that much of the behavior of professors of the humanities can be explained by fear that “the traditional humanities are about to be systematically debunked” by what Bakker calls “the tremendous, scientifically-mediated transformations to come.” In the wake of the “ongoing biomechanical renovation of the human,” Bakker says, it has become a serious question whether “the idiom of the humanities can retain cognitive legitimacy.” If Bakker’s suggestion is correct, then the flight of the humanities from the sciences can be interpreted as something akin to the resistance of old-fashioned surgeons to the practice of washing their hands.

There is, however, another possible interpretation: one that accounts for the similarity between the statistical evidence of rising inequality since the 1970s gathered by many studies and the evidence in favor of the existence of global warming—a comparison not made lightly. In regard to both, there’s a case to be made that many of the anti-naturalistic doctrines developed in the academy have conspired with the mainstream media’s tendency to ignore reality to prevent, rather than aid, political responses—a conspiracy that is itself only encouraged by the current constitutional structure of the American state, which, according to some academic historians (of the non-“postmodern” sort), was originally designed with precisely the intention of ignoring, and preventing action upon, another kind of overwhelming but studiously ignored reality.

In early March 1860, not-yet presidential candidate Abraham Lincoln addressed an audience at New Haven, Connecticut; “the question of Slavery,” he said during that speech, “is the question, the all absorbing topic of the day.” Yet it was also the case, Lincoln observed, that while in private this was the single topic of many conversations, in public it was taboo: according to slavery’s defenders, Lincoln said, opponents of slavery “must not call it wrong in the Free States, because it is not there, and we must not call it wrong in the Slave States because it is there,” while at the same time it should not be called “wrong in politics because that is bringing morality into politics,” and also that it should not be called “wrong in the pulpit because that is bringing politics into religion.” In this way, even as slavery’s defenders could admit that slavery was wrong, they could also deny that there was any “single place … where this wrong thing can properly be called wrong!” Thus, despite the fact that slavery was of towering importance, it was also to be disregarded.

There were, of course, entirely naturalistic reasons for that premeditated silence: as documented by scholars like Leonard Richards and Garry Wills, the structure of American government itself is due to a bargain between the free and the slave states—a bargain that essentially ceded control of the federal machinery to the South in exchange for its cooperation. The evidence is compelling: “between Washington’s election and the Compromise of 1850,” as Richards has noted, “slaveholders controlled the presidency for fifty years, the Speaker [of the House]’s chair for forty-one years, and the chairmanship of House Ways and Means [the committee that controls the federal budget] for forty-two years.” By controlling such key offices, according to these scholars, slaveowners could prevent the federal government from taking any action detrimental to their interests.

The continuing existence, well beyond the end of slavery itself, of structures originally designed to ensure Southern control—among them the Supreme Court and the Senate, institutions well known to constitutional scholars as offerings to society’s “aristocratic” interests, even if the precise nature of that interest is never explicitly identified—may in turn explain, naturalistically, the relative failure of naturalistic, scientific thinking in the humanities over the past several decades, even as the public need for such thinking has only increased. Such, at least, is what might be termed the “positive” interpretation of humanistic antagonism toward science: not so much an interested resistance to progress as a principled reaction to a continuing drag on not just the political interests of Americans, but perhaps even on the progress of knowledge and truth itself.

What’s perhaps odd, to be sure, is that almost no one from the humanities has dared to make this case publicly—only a handful of historians and law professors, most of them far from the scholarly centers of excitement. On the contrary, jobs in the humanities generally go to people who urge, like European lecturer in art history and sociology Anselm Joppe, some version of a “radical separation from the world of politics and its institutions of representation and delegation,” and who ridicule those who “still flock to the ballot box”—often connected, as Joppe’s proposals are, to a ban on television and an opposition to both genetically modified food and infrastructure investment. Still, even when academics have made their case in a responsible way—as Richards and Wills and others have—none has connected that struggle to the larger issues of the humanities generally. Of course, to make such connections—to make such a case—would require those professors to climb down from the ivory tower that is precisely the perch enabling the sort of thinking I have attempted to present here, an undertaking with innumerable, and perhaps insuperable, difficulties. Yet without such attempts it’s difficult to see how either the sciences or the humanities can be preserved—to say nothing of the continuing existence of the United States.

Still, there is one “positive” possibility: if none of them do, then the opportunities for Schadenfreude will become nearly limitless.

Talk That Talk

Talk that talk.
“Boom Boom.”
    John Lee Hooker. 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, there are a lot of things that exist yet have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand in hand with the powerful—as it must, given that it depends on some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it has depended on patronage of one sort or another. Hence a potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth,” after all, was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how society’s surplus is spent. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better still for one person to have the whole million. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emp. original]; and also that “small size equals safe choices.” That is, of the collections he studied, the smaller they were the more homogeneous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, is just the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the grounds that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever wins a contest gets everything—and what that means, as Taleb says about the art world, is “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. What that means is that someone who might possess more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, it would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their position by sheer happenstance rather than merit.
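A toy simulation makes the dynamic concrete. Everything in it—the ten works, their identical starting “quality,” the one-reader-at-a-time attention rule—is an assumption chosen purely to illustrate the winner-take-all mechanism, not a model of any real market:

```python
import random

# Winner-take-all sketch: ten "works" of identical quality compete for attention.
# Each new reader picks a work with probability proportional to the attention it
# already has (a Polya-urn / preferential-attachment dynamic).
def simulate_attention(n_works: int = 10, n_readers: int = 10_000, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    attention = [1.0] * n_works            # every work starts out exactly equal
    for _ in range(n_readers):
        r = rng.uniform(0, sum(attention))
        cumulative = 0.0
        for i, a in enumerate(attention):
            cumulative += a
            if r <= cumulative:
                attention[i] += 1.0        # the chosen work becomes a little more visible
                break
    total = sum(attention)
    return sorted((a / total for a in attention), reverse=True)

shares = simulate_attention()
print([f"{share:.1%}" for share in shares])
# Despite identical quality, a few works typically end up with a disproportionate
# share of all attention: early luck, compounded round after round, does the sorting.
```

Run it with different seeds and the identity of the “winner” changes, but the skewed shape of the outcome does not—which is precisely Taleb’s point about luck.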

A society, in other words, that was attempting to maximize the talent available to it—and it seems hardly arguable that such is the obvious goal—should not be trying to bury potential talent, but instead to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years and years of training and experience, in “culture” such performers often either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish politics than culture—which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature and so on proclaiming “power to the people,” the United States is as unequal a place today as it has been at any time since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome, which also means a potentially greater role for chance. It’s worse when one considers that the court is an institution that gains new members only irregularly: appointing a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
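The first of those claims—that fewer deciders means more variable outcomes—can be illustrated with a small simulation. The coin-flip model of individual leanings and the panel sizes below are assumptions made purely for illustration:

```python
import random
import statistics

# Variance sketch: if each decider independently leans one way or the other with
# probability 0.5, the collective outcome of a small panel swings far more from
# one random draw of members to the next than the outcome of a large one.
def outcome_spread(panel_size: int, trials: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    yes_shares = []
    for _ in range(trials):
        yes_votes = sum(rng.random() < 0.5 for _ in range(panel_size))
        yes_shares.append(yes_votes / panel_size)   # fraction of the panel voting "yes"
    return statistics.stdev(yes_shares)

for size in (9, 101, 1001):
    print(f"panel of {size}: std. dev. of the 'yes' share ~ {outcome_spread(size):.3f}")
# The spread shrinks roughly as 1 / sqrt(panel size): a nine-member panel's outcomes
# depend far more on who happens to be sitting on it.
```

Nothing about nine justices guarantees bad decisions, of course; the point is only that small panels leave much more room for chance in who decides and, therefore, in what gets decided.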

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, requires a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.