A Part of the Main

We may be confident that the Great American Poem will not be written, no matter what genius attempts it, until democracy, the idea of our day and nation and race, has agonized and conquered through centuries, and made its work secure.

But the Great American Novel—the picture of the ordinary emotions and manners of American existence … will, we suppose, be possible earlier.
—John William De Forest. “The Great American Novel.” The Nation 9 January 1868.

Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.” 1853.

 

“It was,” begins Chapter Seven of The Great Gatsby, “when curiosity about Gatsby was at its highest that the lights in his house failed to go on one Saturday night—and, as obscurely as it had begun, his career as Trimalchio was over.” Trimalchio is a character in the ancient Roman novel The Satyricon who, like Gatsby, throws enormous and extravagant parties; there is a lot that could be said about the two novels compared, and some of it has been said by scholars. The problem with comparing the two novels, however, is that, unlike Gatsby, The Satyricon is “unfinished”: all we have today are the 141 not-always-contiguous chapters collated by seventeenth-century editors from two medieval manuscript copies, which are clearly not the entire book. Hence, comparing The Satyricon to Gatsby, or to any other novel, is always handicapped by the fact that, as the Wikipedia page continues, “its true length cannot be known.” Yet is it really true that estimating a message’s total length from only a part of the whole is impossible? Contrary to the collective wisdom of classical scholars and Wikipedia contributors, it isn’t—which we know thanks to techniques developed at the behest of a megalomaniacal Trimalchio convinced that Shakespeare was not Shakespeare, work that eventually became the foundation of the National Security Agency.

Before getting to the history of those techniques, however, it might be best to describe first what they are. Essentially, the problem of figuring out the actual length of The Satyricon is a problem of sampling: that is, of estimating whether you have, like Christopher Columbus, run up on an island—or, like John Cabot, smacked into a continent. In biology, for instance, a researcher might count the number of organisms in a small plot, then extrapolate to the entire area. Another biological technique is to capture and tag some animals in an area, then capture a second sample from the same area some time later—the proportion of that second sample already carrying tags provides a ratio useful for estimating the true size of the population. (The fewer recaptured animals that turn out to be tagged, the larger the total population must be.) Or, to take the example the baseball writer Bill James worked through earlier this year on his website (in “Red Hot Start,” from 16 April), there is the problem of forecasting the final record of a baseball team based upon its start: in this case, the “true underlying win percentage” of the Boston Red Sox, given that the team’s record in its first fifteen games was 13-2. The way James did it is, perhaps, instructive about possible methods for determining the length of The Satyricon.
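Before following James’s method, it is worth pausing on how simple the mark-and-recapture arithmetic is. Here is a minimal sketch in Python of the standard Lincoln–Petersen estimate, with hypothetical numbers chosen only for illustration:

```python
def lincoln_petersen(tagged_first, caught_second, tagged_in_second):
    """Estimate total population size from a mark-and-recapture study: the share
    of tagged animals in the second sample is assumed to mirror the share of
    tagged animals in the population as a whole."""
    return tagged_first * caught_second / tagged_in_second

# Hypothetical numbers, for illustration only: tag 100 fish, later catch another
# 100, and find 5 of them already tagged -- the estimated population is 2,000.
print(lincoln_petersen(tagged_first=100, caught_second=100, tagged_in_second=5))
```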

James begins by noting that because the “probability that a .500 team would go 13-2 or better in a stretch of 15 games is  … one in 312,” while the “probability that a .600 team would go 13-2 in a stretch of 15 games is … one in 46,” it is therefore “much more likely that they are a .600 team than that they are a .500 team”—though with the caveat that, because “there are many more .500 teams than .600 teams,” this is not “EXACTLY true” (emp. James). Next, James computes a standard statistical measure, the standard deviation: that is, the amount by which actual team records spread themselves around the .500 mark of 81-81. James finds this number for teams in the years 2000-2015 to be .070, a low figure, meaning that most team records in that era bunched closely around .500. (By comparison, the standard deviation for “all [major league] teams in baseball history” is .102, meaning that there used to be a wider spread between first-place teams and last-place teams than there is now.) Finally, James arranges the possible records of baseball teams according to what mathematicians call the “Gaussian,” or “normal,” distribution: that is, how team records would look were they to follow the bell-shaped curve familiar from introductory statistics courses, in which most teams have .500 records and very few teams win—or lose—100 games.
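Those two starting odds are plain binomial arithmetic and can be checked in a few lines of Python. This is a quick sketch rather than James’s own calculation; note that the cited figures of one in 312 and one in 46 correspond to the chance of going exactly 13-2 over fifteen games.

```python
from math import comb

def prob_record(wins, losses, true_pct):
    """Binomial probability that a team whose true winning percentage is
    `true_pct` goes exactly `wins`-`losses` over that many games."""
    games = wins + losses
    return comb(games, wins) * true_pct**wins * (1 - true_pct)**losses

for pct in (0.500, 0.600):
    p = prob_record(13, 2, pct)
    print(f"a true {pct:.3f} team starts 13-2 about 1 time in {1 / p:.0f}")
# a true .500 team: about 1 in 312;  a true .600 team: about 1 in 46
```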

If the records of actual baseball teams follow such a normal distribution, James finds that “in a population of 1,000 teams with a standard deviation of .070,” there should be 2 teams above .700, 4 teams with percentages from .675 to .700, 10 teams from .650 to .675, 21 teams from .625 to .650, and so on, down to 141 teams from .500 to .525. (These numbers are mirrored, in turn, by teams with losing records.) Obviously, teams with better final records have better chances of starting 13-2—but at the same time, there are far fewer teams that finish at .700 than teams that finish at .600. As James writes, it is “much more likely that a 13-2 team is actually a .650 to .675 team than that they are actually a .675 to .700 team—just because there are so many more teams” (i.e., 10 teams as compared to 4). So the chances of each level of the distribution producing a 13-2 team actually grow as we approach .500—until, James says, we reach a winning percentage of .550 to .575, where the number of teams is finally outweighed by their quality. Whereas among a thousand teams there are 66 that might be expected to finish between .575 and .600, meaning that slightly more than one of them is likely to have started 13-2 (1.171341, to be precise), the 97 teams between .550 and .575 can be expected to produce only 1.100297 such starts. Doing a bit more mathematics, which I won’t bore you with, James eventually concludes that it is most likely that the 2018 Boston Red Sox will finish the season with a .585 winning percentage, which is between a 95-67 season and a 94-68 season.
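The “bit more mathematics” is essentially a weighted average: weight each level of true talent by how common it is under the bell curve and by how likely such a team is to start 13-2, then take the expected value. Here is a rough sketch of that calculation in Python; it uses the .070 standard deviation quoted above and a coarse grid of talent levels rather than James’s exact buckets, so the result only approximates his figure.

```python
from math import comb, exp, sqrt, pi

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Levels of "true" winning percentage, weighted by a bell curve centered on .500
# with the .070 standard deviation James cites for 2000-2015.
levels = [0.300 + 0.025 * i for i in range(19)]             # .300, .325, ..., .750
weights = [normal_pdf(t, 0.500, 0.070) for t in levels]

# For each level: (how common such teams are) x (chance such a team starts 13-2).
joint = [w * binom_pmf(13, 15, t) for w, t in zip(weights, levels)]
total = sum(joint)

expected = sum(t * j for t, j in zip(levels, joint)) / total
print(f"expected true winning percentage after a 13-2 start: {expected:.3f}")
print(f"projected record: {round(expected * 162)}-{round(162 - expected * 162)}")
# prints a figure in the neighborhood of .58, close to James's .585
```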

What, however, does all of this have to do with The Satyricon, much less with the National Security Agency? In the specific case of the Roman novel, James provides a model for how to go about estimating the total length of the now-lost complete work: a model that begins by figuring out what league Petronius is playing in, so to speak. In other words, we would have to know something about the distribution of the lengths of fictional works: do they converge strongly on some average length—i.e., have a low standard deviation—the way that baseball teams converge around 81-81? Or do they wander far afield, so that the standard deviation is high? The author(s) of the Wikipedia article appear to believe that knowing this is impossible, or nearly so; the Stanford literary scholar Franco Moretti seems to agree, confessing that although he works “on West European narrative between 1790 and 1930,” he “already feel[s] like a charlatan” because he only works “on its canonical fraction, which is not even one percent of published literature.” There are, Moretti observes, “thirty thousand nineteenth-century British novels out there”—or are there forty, or fifty, or sixty thousand? “[N]o one really knows,” he concludes—and that is without even considering the “French novels, Chinese, Argentinian, [or] American” ones. But to compare The Satyricon to all novels would be to accept a high standard deviation—and hence a fairly wide range of possible lengths.

Alternately, The Satyricon could be compared only to its ancient comrades and competitors: the five ancient Greek novels that survive complete from antiquity, for example, along with the only Roman novel to survive complete—Apuleius’ The Metamorphoses. Were The Satyricon compared only to ancient novels (and of those, only the complete ones), the standard deviation would likely be lower, meaning that the lengths would cluster more tightly around the mean. That would imply a tighter range of possible lengths—at the risk, since six ancient novels could all differ in length from The Satyricon far more than the whole population of novels likely would, of making a greater error in the estimate. The choice of which set to use (all novels, or only ancient ones) is therefore a choice between a higher chance of being accurate and a higher chance of being precise. Either way, Wikipedia’s claim that the length “cannot be known” is only true if the words “with absolute certainty” are added. Our best guess can either be nearly certain to contain the true length within its range, or be nearly certain—if it is accurate at all—to sit very close to the true length; which is to say that it is entirely possible that we could know the true length of The Satyricon, even if we could not be certain that we did in fact know it.
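In statistical terms, such an estimate is a prediction interval built from whichever comparison set we choose. A minimal sketch, with placeholder word counts that are purely hypothetical (real figures would have to be taken from the surviving texts themselves):

```python
from math import sqrt
from statistics import mean, stdev

def rough_length_range(lengths, t=2.571):
    """Rough 95% prediction interval for one more work drawn from the same
    'league' as `lengths` (t is the two-sided 95% Student-t value for n - 1
    degrees of freedom; 2.571 is the value for a comparison set of six works)."""
    n = len(lengths)
    m, s = mean(lengths), stdev(lengths)
    half = t * s * sqrt(1 + 1 / n)
    return m - half, m + half

# Hypothetical word counts, for illustration only -- not real figures for the
# five Greek novels and Apuleius.
ancient_novels = [55_000, 60_000, 65_000, 70_000, 80_000, 90_000]
low, high = rough_length_range(ancient_novels)
print(f"predicted length of another work from this 'league': {low:,.0f} to {high:,.0f} words")
```

Swap in the lengths of novels generally and the interval widens; restrict it to the six ancient survivors and it narrows, which is the accuracy-versus-precision tradeoff described above.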

That answers the question of how we could know the length of The Satyricon—but when I began this story I promised that I would (eventually) relate it to the foundations of the National Security Agency. Those, I mentioned, began with an eccentric millionaire convinced that William Shakespeare did not write the plays that now bear his name. The millionaire’s name was George Fabyan; in the early twentieth century he brought together a number of researchers in the new field of cryptography in order to “prove” his pet theory that Francis Bacon was the true author of the Bard’s work. Bacon was known as the inventor of the cipher system that bears his name, and Fabyan accordingly subscribed to the proposition that Bacon had concealed the fact of his authorship by means of coded messages hidden within the plays themselves. The first professional American codebreakers thus found themselves employed on Fabyan’s 350-acre estate (“Riverbank”) on the Fox River just south of Geneva, Illinois—still there today—which is where American military minds found them upon the country’s entry into World War One in 1917.

Specifically, they found Elizebeth Smith and William Friedman (who would later marry). During the war the couple helped train several federal employees in the art of codebreaking. By 1921 they had been hired away by the War Department, and they spent the 1920s, in the service of the Coast Guard, breaking the codes of gangsters smuggling liquor into the dry United States. During World War Two, Elizebeth would be employed in breaking one of the Enigma codes used by the German Navy; meanwhile, her husband William had founded the Army’s Signal Intelligence Service—the outfit that broke Japan’s “Purple” diplomatic cipher, and the direct predecessor of the National Security Agency. William had also written the scientific papers that underlay their work; he had, in fact, coined the word cryptanalysis itself.

Central to Friedman’s work was something now called the “Friedman test,” but then called the “kappa test.” This test, like Bill James’ work, compares two probabilities. The first is the chance that two randomly chosen letters will match if every letter of the alphabet were equally likely, which for the 26 letters of English is one in 26, or about 0.0385. The second, less obvious, is the chance that two letters selected at random from ordinary English text will turn out to be the same letter, which is known to be about 0.067 (common letters like e and t make matches more frequent than pure chance would). Knowing those two figures, plus how long the intercepted coded message is, allows the cryptographer to estimate the length of the key, the secret sequence that drives the cipher—just as James can calculate the likely final record of a team that starts 13-2 by weighing two different probabilities.
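What Friedman called kappa is what cryptographers now call the index of coincidence, and the key-length estimate follows from comparing the observed value of that index against the two constants above. A minimal sketch of the arithmetic in Python (my paraphrase of the standard formula, not Friedman’s own notation):

```python
from collections import Counter
from string import ascii_uppercase

KAPPA_PLAIN = 0.067     # chance two letters of ordinary English text match
KAPPA_RANDOM = 1 / 26   # chance they match if all 26 letters were equally likely (~0.0385)

def index_of_coincidence(text):
    """Probability that two letters drawn at random from `text` are the same letter."""
    letters = [c for c in text.upper() if c in ascii_uppercase]
    n = len(letters)
    counts = Counter(letters)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def friedman_key_length(ciphertext):
    """Friedman's kappa-test estimate of the key length of a periodic
    polyalphabetic (e.g. Vigenere) cipher, from the observed coincidence rate."""
    letters = [c for c in ciphertext.upper() if c in ascii_uppercase]
    n = len(letters)
    ic = index_of_coincidence(ciphertext)
    return (KAPPA_PLAIN - KAPPA_RANDOM) * n / (
        (n - 1) * ic - KAPPA_RANDOM * n + KAPPA_PLAIN)
```

A message enciphered with a single alphabet scores near 0.067; the more alphabets the key cycles through, the closer the score falls toward 0.0385, and the estimate simply inverts that relationship.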

Figuring out the length of The Satyricon, then, might not be quite the Herculean task it has been represented to be—which raises the question: why has it been represented that way? The answer, it seems to me, has something to do with the status of the “humanities” themselves: using statistical techniques to estimate the length of The Satyricon would damage the “firewall” that preserves disciplines like Classics, or literary study generally, from the grubby, no-account hands of the sciences—a firewall, we are eternally reminded, necessary in order to foster what Geoffrey Harpham, former director of the National Humanities Center, has called “the capacity to sympathize, empathize, or otherwise inhabit the experience of others” so “clearly essential to democratic citizenship.” That may be so—but it is also true that maintaining the firewall allows law schools, as Sanford Levinson of the University of Texas remarked some time ago, to continue to emphasize “traditional, classical legal skills” at the expense of “‘finding out how the empirical world operates.’” Since that emphasis has given the U.S. Supreme Court the luxury of considering (in Gill v. Whitford) whether to ignore a statistical measure of gerrymandering, and since it is quite clear that the disciplines known as the humanities draw their students from wealthy backgrounds at a disproportionate rate, it might be wondered precisely in what way those disciplines are “essential to democratic citizenship”—or rather, what idea of “democracy” is really being preserved here. If so, then—perhaps using what Fitzgerald called “the dark fields of the republic”—the final record of the United States can quite easily be predicted.


Lex Majoris

The first principle of republicanism is that the lex majoris partis is the fundamental law of every society of individuals of equal rights; to consider the will of the society enounced by the majority of a single vote, as sacred as if unanimous, is the first of all lessons in importance, yet the last which is thoroughly learnt. This law once disregarded, there is no other but that of force, which ends necessarily in military despotism.
—Thomas Jefferson. Letter to Baron von Humboldt. 13 June 1817.

Since Hillary Clinton lost the 2016 American presidential election, many of her supporters have been quick to cry “racism” on the part of voters for her opponent, Donald Trump. According to Vox’s Jenée Desmond-Harris, for instance, Trump won the election “not despite but because he expressed unfiltered disdain toward racial and religious minorities in the country.” Aside from being the easier interpretation, because it allows Clinton voters to ignore the role their own economic choices may have played in the broad support Trump received throughout the country, such accusations are counterproductive even on their own terms because—only seemingly paradoxically—they reinforce many of the supports racism still receives in the United States: above all, because they weaken the intellectual argument for a national direct election for the presidency. By shouting “racism,” in other words, Hillary Clinton’s supporters may end up helping to continue racism’s institutional support.

That institutional support begins with the method by which Americans elect their president: the Electoral College—a method that, as many have noted, is not used in any other industrialized democracy. Although many scholars and others have advanced arguments for the existence of the College through the centuries, most of these “explanations” are, in fact, intellectually incoherent. The most common of the traditional “explanations” concerns the supposed conflict between the “large states” and the “small”—yet in the actual United States, as James Madison, known as the “Father of the Constitution,” noted at the time, there had not then been, and has not been since, a situation in American history that pitted larger-population states against smaller-population ones. Meanwhile, the other “explanations” for the Electoral College do not even rise to this level of incoherence.

In reality there is only one explanation for the existence of the college, and that explanation has been most forcefully and clearly made by law professor Paul Finkelman, now serving as a Senior Fellow at the University of Pennsylvania after spending much of his career at obscure law schools like the University of Tulsa College of Law, the Cleveland-Marshall College of Law, and the Albany Law School. As Finkelman has been arguing for decades (his first papers on the subject were written in the 1980s), the Electoral College was originally invented by the delegates to the Constitutional Convention of 1787 in order to protect slavery. That such was the purpose of the College can be known, most obviously, because the delegates to the convention said so.

When the means of electing a president was first debated, the convention had already decided, for purposes of representation in the newly created House of Representatives, to count black slaves by means of the infamous three-fifths ratio. That ratio, in turn, had its effect on the debate over the presidency: delegates like James Madison argued, as Finkelman notes, that the existence of such a college—whose composition would be based on each state’s representation in the House—would “guarantee that the nonvoting slaves could nevertheless influence the presidential election.” Or as Hugh Williamson, a delegate from North Carolina, observed during the convention, if American presidents were elected by direct national vote the South would be shut out of electing a national executive because “her slaves will have no suffrage”—that is, because in a direct vote all that would matter is the number of voters, the Southern states would lose the advantage the three-fifths ratio gave them in the House. The existence of the Electoral College is thus directly tied to the prior decision to grant Southern slave states an advantage in Congress: it is another in the string of institutional decisions made by the convention’s delegates to protect domestic slavery.

Yet, assuming that Finkelman’s case for the racism of the Electoral College is true, how can decrying the racism of the American voter harm the case for abolishing that college? The answer goes back to the justification not only of presidential elections but of elections in general: the gradual discovery, during the eighteenth-century Enlightenment, of what is today known as the Law of Large Numbers.

Putting the law in capital letters, I admit, tends to mystify it, but anyone who buys insurance already understands the substance of the concept. As the New Yorker writer Malcolm Gladwell once explained, “the safest and most efficient way to provide insurance” is “to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, the more people who participate in an insurance plan, the steadier the plan’s average costs become, and so the more reliably its members can be protected. The Law of Large Numbers explains why.
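Here is the arithmetic in miniature, using made-up numbers (a 5 percent chance of a $10,000 claim, both figures hypothetical): the year-to-year swing in the average cost per member shrinks with the square root of the pool size.

```python
from math import sqrt

def per_member_swing(pool_size, p_claim=0.05, claim_cost=10_000):
    """Standard deviation of the average per-member cost in a pool where each
    member independently has a p_claim chance of a claim costing claim_cost."""
    return claim_cost * sqrt(p_claim * (1 - p_claim) / pool_size)

for n in (100, 1_000, 10_000, 100_000):
    print(f"pool of {n:>7,}: average cost per member swings by about ${per_member_swing(n):,.0f}")
# each tenfold increase in the pool cuts the swing by a factor of sqrt(10), about 3.2
```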

That reason is the same one that explains why, as Peter Bernstein remarks in Against the Gods: The Remarkable Story of Risk, increasing the number of times we toss a coin “will correspondingly increase the probability that the ratio of heads thrown to total throws” will stray from one-half by less than any stated amount. Or the reason that—as physicist Leonard Mlodinow has pointed out—in order really to tell which of two baseball teams is better, a World Series would have to be at least 23 games long (if one team were much better than the other), and possibly as long as 269 games (between two closely matched opponents). Only by playing so many games can random chance be confidently excluded: as Carl Bialik of FiveThirtyEight once pointed out, usually “in sports, the longer the contest, the greater the chance that the favorite prevails.” Or, as the Israeli psychologists Daniel Kahneman and Amos Tversky put the point in 1971, “the law of large numbers guarantees that very large samples will indeed be representative”: it is what scientists rely upon to know that, if they have performed enough experiments or pored over enough data, they know enough to exclude idiosyncratic results. The Law of Large Numbers asserts, in short, that the more times we repeat something, the closer we approach its true value.
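Mlodinow’s series-length point can be checked directly from the binomial distribution. A short sketch, assuming for illustration a favorite that wins 55 percent of individual games:

```python
from math import comb

def favorite_wins_series(p, n_games):
    """Chance that a team with per-game win probability p wins a majority of
    an n_games series (n_games assumed odd, so there are no ties)."""
    needed = n_games // 2 + 1
    return sum(comb(n_games, k) * p**k * (1 - p)**(n_games - k)
               for k in range(needed, n_games + 1))

for n in (7, 23, 269):
    print(f"best of {n:>3}: a 55% team prevails {favorite_wins_series(0.55, n):.1%} of the time")
```

For that hypothetical favorite, the best-of-7 is won only about 61 percent of the time, while the best-of-269 is won roughly 95 percent of the time; that is the sense in which only a long contest squeezes the luck out.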

It’s for just that reason that many have noted the connection between science and democratic government: “Science and democracy are powerful partners,” as the website of the Union of Concerned Scientists puts it. What makes the two such “powerful” partners is that the Law of Large Numbers underlies the act of holding elections: as James Surowiecki put the point in his book The Wisdom of Crowds, the theory of democracy is that “the larger the group, the more reliable its judgment will be.” Just as scientists think that, by replicating an experiment, they can more readily trust its results, so too does a democratic government implicitly assume that, by including more people in the decision-making process, it can more readily arrive at the “correct” solution: as James Madison put it in The Federalist No. 10, if you “take in a greater variety of parties and interests,” then “you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens.” Without such a belief, after all, there would be no reason not to trust, say, a ruling caste to make decisions for society—or even a single, perhaps orange-toned, individual. Without some concept of the Law of Large Numbers—some belief that increasing the number of trials, or the number of inputs, makes for better results—there is no reason for democratic government at all.

That’s why, when people criticize the Electoral College, they are implicitly invoking the Law of Large Numbers. The Electoral College divides the pool of American voters into fifty smaller pools, whereas a national popular vote would collect all Americans into a single lump—a point that some defenders of the College seek to make into a virtue rather than the vice it is. In the wake of the 2000 election, for example, Senator Mitch McConnell wrote that the “Electoral College served to center the post-election battles in Florida,” preventing the “vote recounts and court battles in nearly every state of the Union” that, McConnell assures us, would have occurred in the College’s absence. But as Timothy Noah pointed out in The New Republic in 2012, what McConnell’s argument “fails to realize is that when you’re assembling one big count rather than a lot of little ones it’s a lot less clear what’s to be gained from rigging any of the little ones.” If what matters is the national popular vote, what happens in any one location matters far less: stealing votes in downstate Illinois no longer delivers the state’s entire slate of electors, only the handful of votes actually stolen, diluted in a national pool—just as, with enough samples or experiments run, the fact that the lab assistant was drowsy the day she recorded one set of results will not matter so much. It is also why deliberately losing a single game in July hardly matters, while tanking a game of the World Series does.

Put in such a way, it’s hard to see how anyone without a vested stake in the construction of the present system could defend the Electoral College—yet, as I suspect we are about to see, the very people now ascribing Donald Trump’s victory to the racism of the American voter will soon be doing just that. The reason will be precisely the same reason that such advocates want to blame racism, rather than the ongoing thievery of economic elites, for the rejection of Clinton: because racism is a “cultural” phenomenon, and most left-wing critics of the United States now obtain credentials in “cultural,” rather than scientific, disciplines.

If, in other words, Donald Trump’s victory was due to a complex series of renegotiations of the global contract between capital and labor, then explaining it would require experts in economics and similar disciplines; if his victory was due to racism, however—racism being considered a cultural phenomenon—then explaining it will call forth experts in “cultural” fields. Because those with “liberal” or “leftist” political leanings now tend to gather in “cultural” fields, they will (indeed, must) attempt to shift the battleground toward their own areas of expertise. That shift, I would wager, will in turn lead those who argue for “cultural” explanations of the rise of Trump to line up against arguments for the elimination of the Electoral College.

The reason is not difficult to understand: it isn’t too much to say, in fact, that one way to define the study of the humanities is to say it comprises the disciplines that largely ignore, or even oppose, the Law of Large Numbers both as a practical matter and as a philosophic one. As literary scholar Franco Moretti, now of Stanford, observed in his Atlas of the European Novel, 1800-1900, just as “silver fork novels”—a genre published in England between the 1820s and the 1840s—do not “show ‘London,’ but only a small, monochrome portion of it,” so too does the average student of literature not really study her ostensible subject matter. “I work on west European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain and France,” Moretti confesses in an essay entitled “Distant Reading”—and even then, he only works “on its canonical fraction, which is not even 1 percent of published literature.” As Joshua Rothman put the point in a New Yorker profile of Moretti a few years ago, Moretti instead insists that “if you really want to understand literature, you can’t just read a few books or poems over and over,” but instead “you have to work with hundreds or even thousands of texts at a time”—that is, he insists on the significance of the Law of Large Numbers in his field, an insistence whose very novelty demonstrates how literary study is a field that has historically resisted precisely that recognition.

In order to proceed, in other words, disciplines like literary study or art history—or even history itself—must argue for the representativeness of a given body of work: usually termed, at least in literary study, “the Canon.” Such disciplines are, simply by their nature, committed to the idea that it is not necessary to read all of the “thirty thousand nineteenth-century British novels out there” in order to arrive at conclusions about the nineteenth-century British novel: in the first place, “no one really knows” how many there really are (there could easily be twice as many), and in the second, “no one has read them [all], [and] no one ever will.” In order to get off the ground, such disciplines must necessarily deny the Law of Large Numbers: as Moretti says, “you invest so much in individual texts only if you think that very few of them really matter”—a belief with an obvious political corollary. The rejection of the Law of Large Numbers is thus, as Moretti also observes, “an unconscious and invisible premiss” for most who study such fields—which is to say that although students of the humanities often make claims for the political utility of their work, they sometimes forget that the enabling presuppositions of their fields are those of the pre-Enlightenment ancien régime.

Perhaps that’s why—as Joe Pinsker observed in a fascinating, if short, article for The Atlantic several years ago—studies of college students find that those “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” while students “whose parents make more money flock to history, English, and the performing arts”: the baseline assumptions of those disciplines are, whatever the particular predilections of a given instructor, essentially aristocratic rather than democratic. To put it baldly, the disciplines of the humanities must reject the premise of the Law of Large Numbers—the premise that, as more examples are added, we come closer to the truth—a rejection on display when, for instance, English professor Michael Bérubé of Pennsylvania State University observes that the “humanists at [his] end of the [academic] hallway roundly dismissed” Harvard biologist E. O. Wilson’s book Consilience: The Unity of Knowledge for arguing that “all human knowledge can and eventually will be unified under the rubric of the natural sciences.” Rejecting the Law of Large Numbers is foundational to the very operation of the humanities: without that rejection, they cannot exist.

In recent decades, of course, Franco Moretti has presumably not been the only professor in the humanities to realize that these disciplines stand on a collision course with the Law of Large Numbers—which may explain why disciplines like literature have, for years, been actively recruiting among members of minority groups. The institutional motivation for such hiring ought to be readily apparent: by making such hires, departments of the humanities could insulate themselves from charges from the political left—while continuing practices that, without such cover, might have appeared increasingly anachronistic in a democratic age. Minority hiring, that is, may not be so politically “progressive” as its defenders sometimes argue: it may, in fact, have prevented the intellectual reforms within the humanities urged by people like Moretti for a generation or more. Of course, by joining such departments, members of minority groups may also have, consciously or not, tied their own fortunes to a philosophic rejection of concepts like the Law of Large Numbers—as the African-American sportswriter Michael Wilbon, of ESPN fame, wrote this past May, black people supposedly have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” I suspect, then, that many who claim to be on the political left will soon come out to defend the Electoral College. If that happens, then in one last cruel historical irony the final defenders of American slavery may end up being precisely those whom slavery was meant to oppress.

Talk That Talk

Talk that talk.
—John Lee Hooker. “Boom Boom.” 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is the one described in historian Todd Gitlin’s phrase: those who “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, plenty of things exist that have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin sympathy for, and solidarity with, society’s downtrodden with the belief that the way to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand in hand with the powerful—as it must, given that it depends on some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message: it has depended on patronage of one sort or another. Hence a potential intellectual weakness in basing a “left” on the idea of culture: the actual structure of the world of culture is precisely the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth,” after all, was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it is better for ten people to have $100,000 each than for a hundred to have $10,000 each, then it ought to be better still for one person to have the whole million. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by reference—perhaps coincidentally—to “culture”: an area he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to the Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emp. original]; and also that “small size equals safe choices.” That is, the smaller the collections he studied were, the more homogeneous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, is just the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which ever-larger shares of a society’s spoils go to fewer and fewer contestants.

Yet whereas, according to Carnegie’s theory, this is all to the good—on the grounds that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever wins a given contest gets everything—which means, as Taleb says of the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it is entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much. But who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. Someone of greater overall merit who happened to be overlooked early on will tend to be buried by anyone who just happened to start with an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that is bad enough—the world is not using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, it will tend to produce less capable, not more capable, leaders, because it becomes more—not less—likely that they reached their positions by sheer happenstance rather than merit.
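Taleb’s dynamic is easy to caricature in code. What follows is only a toy cumulative-advantage (“rich get richer”) simulation, not Taleb’s own model: a hundred artists of nearly identical quality compete for attention, and each new unit of attention flows disproportionately to whoever already has some. Run it a few times and the most-rewarded artist is often not the most talented one.

```python
import random

def cumulative_advantage(n_artists=100, n_rounds=10_000, seed=1):
    """Toy simulation of a winner-take-all market: artists of nearly identical
    quality compete, but each new unit of attention goes to an artist with
    probability proportional to (quality x attention already held)."""
    random.seed(seed)
    quality = [random.gauss(1.0, 0.01) for _ in range(n_artists)]  # tiny real differences
    attention = [1.0] * n_artists                                  # everyone starts equal
    for _ in range(n_rounds):
        weights = [q * a for q, a in zip(quality, attention)]
        winner = random.choices(range(n_artists), weights=weights)[0]
        attention[winner] += 1
    top = max(range(n_artists), key=lambda i: attention[i])
    best = max(range(n_artists), key=lambda i: quality[i])
    share = attention[top] / sum(attention)
    print(f"most-rewarded artist holds {share:.0%} of all attention; "
          f"is it also the highest-quality artist? {top == best}")

cumulative_advantage()
```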

A society, in other words, that wants to maximize the talent available to it—and that seems an obviously desirable goal—should not be burying potential talent but exposing as much of it as possible: getting it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years of training and experience, in “culture” such performers often either emerge in youth or not at all. Of all the parts of human life, in fact, it is difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense it is hard to think of a worse model for a left politics than culture—which perhaps explains why, despite universities bulging with professors of art and literature proclaiming “power to the people,” the United States is as unequal a place today as it has been at any time since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to just such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome—and hence the greater the potential role of chance. It is worse when one considers that the Court gains new members only irregularly: appointing a new justice depends on who happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of a nation’s most important decisions potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, requires a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.