Baal

Just as ancient Greek and Roman propagandists insisted, the Carthaginians did kill their own infant children, burying them with sacrificed animals and ritual inscriptions in special cemeteries to give thanks for favours from the gods, according to a new study.
The Guardian, 21 January 2014.

 

Just after the last body fell, at three seconds after 9:40 on the morning of 14 December, the debate began: it was about, as it always is, whether Americans ought to follow sensible rules about guns—or whether guns ought to be easier to obtain than, say, the right to pull fish out of the nearby Housatonic River. A lot of words have been written about the Sandy Hook killings since the day that Adam Lanza—the last body to fall—killed twenty children and six adults at the elementary school he once attended, but few of them have examined the culpability of some of the very last people one might expect: the denizens of the nation’s universities. After all, it’s difficult to accuse people who are themselves largely in favor of gun control of aiding and abetting the National Rifle Association—Pew Research reported, in 2011, that more than half of Americans with more than a college degree favored gun control. And yet, over the past several generations a doctrine has gained ground that, I think, has not only allowed academics to absolve themselves of engaging in the debate over gun control, but has actively harmed the possibility of accomplishing it.

Having said that, of course, it is important to acknowledge that virtually all academics—even those who consider themselves “conservative” politically—are in favor of gun control: when, for example, Texas recently passed a law legalizing the carrying of guns on college campuses, Daniel S. Hamermesh, a University of Texas emeritus professor of economics (not exactly a discipline known for its radicalism), resigned his position, citing a fear for his own and his students’ safety. That’s not likely accidental, because not only do many academics oppose guns in their capacities as citizens, but academics have a special concern when it comes to guns: as Firmin DeBrabander, a professor of philosophy at the Maryland Institute College of Art, argued in the pages of Inside Higher Ed last year against laws similar to Texas’, “guns stand opposed” to the “pedagogical goals of the classroom,” because while in the classroom “individuals learn to talk to people of different backgrounds and perspectives,” guns “announce, and transmit, suspicion and hostility.” If anyone has a particular interest in controlling arms, in other words, it’s academics, since their work is particularly designed to foster what DeBrabander calls “open and transformative exchange” that may air “ideas [that] are offensive.” So to think that academics may in fact be an obstacle to achieving sensible policies regarding guns might appear ridiculous on the surface.

Yet there’s actually good reason to think that academic liberals bear some responsibility for the United States’ inability to regulate guns like every other industrialized—I nearly said, “civilized”—nation on earth. That’s because changing gun laws would require specific demands for action, and as political science professor Adolph Reed, Jr. of the University of Pennsylvania put the point not long ago in Harper’s, these days the “left has no particular place it wants to go.” That is, to many on campus and off, making specific demands of the political sphere is itself a kind of concession—or in other words, as journalist Thomas Frank remarked a few years ago about the Occupy Wall Street movement, today’s academic left teaches that “demands [are] a fetish object of literal-minded media types who stupidly crave hierarchy and chains of command.” Demanding changes to gun laws is, after all, a specific demand, and to make specific demands is, from this sophisticated perspective, a kind of “selling out.”

Still, how did the idea of making specific demands become a derided form of politics? After all, the labor movement (the eight-hour day), the suffragette movement (women’s right to vote), and the civil rights movement (an end to Jim Crow) all made specific demands. How then has American politics arrived at the diffuse and essentially inarticulable argument of the Occupy movement—a movement within which, Elizabeth Jacobs claimed in a report for the Brookings Institution while the camp in Zuccotti Park still existed, “the lack of demands is a point of pride”? I’d suggest that one possible way the trick was turned was through a 1967 article written by one Robert Bellah, of Harvard: an article that described American politics, and its political system, as a “civil religion.” By describing American politics in religious rather than secular terms, Bellah opened the way towards what some have termed the “non-politics” of Occupy and other social movements—and, incidentally, towards allowing children like Adam Lanza’s victims to die.

In “Civil Religion in America,” Bellah—who received his bachelor’s from Harvard in 1950, and then taught at Harvard until moving to the University of California at Berkeley in 1967, where he continued until the end of his illustrious career—argued that “few have realized that there actually exists alongside of and rather clearly differentiated from the churches an elaborate and well-institutionalized civil religion in America.” This “national cult,” as Bellah terms it, has its own holidays: Thanksgiving Day, Bellah says, “serves to integrate the family into the civil religion,” while “Memorial Day has acted to integrate the local community into the national cult.” Bellah also remarks that the “public school system serves as a particularly important context for the cultic celebration of the civil rituals” (a remark that, incidentally, perhaps has played no little role in the attacks on public education over the past several decades). Bellah further argues that various speeches by American presidents like Abraham Lincoln and John F. Kennedy are examples of this “civil religion” in action: he spends particular time with Lincoln’s Gettysburg Address, which, he notes, the poet Robert Lowell observed is filled with Christian imagery and constitutes “a symbolic and sacramental act.” In saying so, Bellah is merely following a longstanding tradition regarding both Lincoln and the Gettysburg Address—a tradition that, however, does not have the political valence that Bellah, or his literal spiritual followers, might think it does.

“Some think, to this day,” wrote Garry Wills of Northwestern University in his magisterial Lincoln at Gettysburg: The Words That Remade America, “that Lincoln did not really have arguments for union, just a kind of mystical attachment to it.” It’s a tradition that Wills says “was the charge of Southerners” against Lincoln at the time: after the war, Wills notes, Alexander Stephens—the only vice president the Confederate States ever had—argued that the “Union, with him [Lincoln], in sentiment rose to the sublimity of a religious mysticism.” Still, it’s also true that others felt similarly: Wills points out that the poet Walt Whitman wrote that “the only thing like passion or infatuation” in Lincoln “was the passion for the Union of these states.” Nevertheless, it’s a dispute that might have fallen by the historical wayside if it weren’t for the work of literary critic Edmund Wilson, who called his essay on Lincoln (collected in the relatively famous book Patriotic Gore: Studies in the Literature of the American Civil War) “The Union as Religious Mysticism.” That book, published in 1962, seems at least to have influenced Lowell—the two were, if not friends, at least part of the same New York City literary scene—and that Wilson’s notion then reached Bellah through Lowell seems plausible.

Even if there was no direct route from Wilson to Bellah, however, it seems indisputable that the notion—taken from Southerners—concerning the religious nature of Lincoln’s arguments for the American Union became widely transmitted through American culture. Richard Nixon’s speechwriter, William Safire—later a longtime columnist for the New York Times—was familiar with Wilson’s ideas: as Mark Neely observed in his The Fate of Liberty: Abraham Lincoln and Civil Liberties, on two occasions in Safire’s novel Freedom, “characters comment on the curiously ‘mystical’ nature of Lincoln’s attachment to the Union.” In 1964 the theologian Reinhold Niebuhr published an essay entitled “The Religion of Abraham Lincoln,” while in 1963 William J. Wolfe of the Episcopal Theological School of Cambridge, Massachusetts claimed that “Lincoln is one of the greatest theologians in America,” in the sense “of seeing the hand of God intimately in the affairs of nations.” In the early 1960s and afterwards, in other words, the idea took root among some literary intellectuals that the United States was a religious society—not one based on an entirely secular philosophy.

At least when it comes to Lincoln, at any rate, there’s good reason to doubt this story: far from being a religious person, Lincoln has often been described as non-religious or even an atheist. His longtime friend Jesse Fell—so close to Lincoln that it was he who first suggested what became the famous Lincoln-Douglas debates—for instance once remarked that Lincoln “held opinions utterly at variance with what are usually taught in the church,” and Lincoln’s law partner William Herndon—who was an early fan of Charles Darwin’s—said that the president also was “a warm advocate of the new doctrine.” Being committed to the theory of evolution—if Lincoln was—doesn’t mean, of course, that the president was therefore anti-religious, but it does mean that the notion of Lincoln as religious mystic has some accounting to do: if he was, it apparently was in no very simple way.

Still, as mentioned, the view of Lincoln as a kind of prophet did achieve at least some success within American letters—but, as Wills argues in Lincoln at Gettysburg, that success has in turn obscured what Lincoln really argued concerning the structure of American politics. As Wills remarks, for instance, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster, and few if any have considered Webster a mystic.” Webster’s views, in turn, descend from a line of American thought that goes back to the Revolution itself—though its most significant moment was at the Constitutional Convention of 1787.

Most especially, to one James Wilson, a Scottish emigrant, delegate to the Constitutional Convention of 1787, and later one of the first justices of the Supreme Court of the United States. If Lincoln got his notions of the Union from Webster, then Webster got his from Supreme Court Justice Joseph Story: as Wills notes, Theodore Parker, the Boston abolitionist minister, once remarked that “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” Story, for his part, got his notion from Wilson: as Linda Przybyszewski notes in passing in her book The Republic According to John Marshall Harlan (a later justice), Wilson was “a source for Joseph Story’s constitutional nationalism.” And Wilson’s arguments concerning the Constitution—which he had a strong hand in making—were hardly religious.

At the constitutional convention, one of the most difficult topics to confront the delegates was the issue of representation: one of the motivations for the convention itself, after all, was the fact that under the previous terms of government, the Articles of Confederation, each state, rather than each member of the Continental Congress, possessed a vote. Wilson had already, in 1768, attacked the problem of representation as one of the foremost reasons for the Revolution itself—the American colonists were supposed, by British law, to be fully as much British subjects as any Londoner or Mancunian, yet they had no representation in Parliament: “Is British freedom,” Wilson therefore asked in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament, “denominated from the soil, or from the people, of Britain?” That question was very much the predecessor of the question Wilson would ask at the convention: “For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land.

Wilson also drew attention to the disparities of population between the several states. At the time of the convention, Pennsylvania—just as it is today—was a much more populous state than New Jersey, a difference that made no difference under the Articles of Confederation, under which all states had the same number of votes: one. “Are not the citizens of Pennsylvania,” Wilson therefore asked the Convention, “equal to those of New Jersey? Does it require 150 of the former to balance 50 of the latter?” This argument would later be echoed by Lincoln, who, in order to illustrate the differences between free states and slave states, would—in October of 1854, at Peoria, in the speech that would mark his political comeback—note that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

The point of attack for both men, in other words, was precisely the same: the matter of representation in terms of what would later be called a “one man, one vote” standard. It’s an argument that hardly appears “mystical” in nature: since the matter turns, if anything, upon ratios of numbers to each other, it seems more apposite to describe the point of view adopted here as “scientific”—if it weren’t for the fact that even the word “scientific” seems too dramatic for a matter that appears to be far more elemental.
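Since the claim here is that the argument is arithmetical rather than mystical, it may be worth actually doing the arithmetic. Here is a minimal sketch in Python using only the figures quoted from the Peoria speech above; nothing in it goes beyond the quoted passage:

```python
# Population figures quoted in Lincoln's October 1854 Peoria speech (see above).
maine_whites = 581_813
south_carolina_whites = 274_567

# Lincoln: Maine has "twice as many as South Carolina, and 32,679 over."
surplus = maine_whites - 2 * south_carolina_whites
print(surplus)  # 32679, matching the figure in the speech

# With identical representation (six representatives, eight electors, two
# senators apiece), each South Carolina voter carries roughly twice the
# weight of a Maine voter.
ratio = maine_whites / south_carolina_whites
print(round(ratio, 2))  # about 2.12
```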

Were Lincoln or Wilson alive today, then, it seems that the first point they might make about the gun control debate is that it is a matter about which the Congress is greatly at variance with public opinion: as Carl Bialik reported for FiveThirtyEight this past January, whenever Americans are polled “at least 70 percent of Americans [say] they favor background checks,” and furthermore that an October 2015 poll by CBS News and the New York Times “found that 92 percent of Americans—including 87 percent of Republicans—favor background checks for all gun buyers.” Yet, as virtually all Americans are aware, it has become essentially impossible to pass any sort of sensible legislation through Congress: a fact dramatized this spring by a “sit-down strike” in Congress by congressmen and congresswomen. What Lincoln and Wilson might further say about the point is that the trouble can’t be solved by such a “religious” approach: instead, what they presumably would recommend is that what needs to change is a system that inadequately represents the people. That isn’t the answer that’s on offer from academics and others on the American left, however. Which is to say that, soon enough, there will be another Adam Lanza to bewail—another of the sacrifices, one presumes, that the American left demands Americans must make to what one can only call their god.

Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that there are come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus “leftists”: to them, widening inequality might be explained by the same reason that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but Lord Cornwallis’ forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is the only reason we know about it in the detail we now do—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II many Kenyans, like the peoples of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the methods being used in Kenya were also being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the “Kenya Emergency,” the British Empire was also fighting in what’s called the “Malayan Emergency.”

In Malaya, from 1948 to 1960, the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency warfare that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very term “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that the basis of elite education in the Western world for millennia began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual for “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture in several places (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, standard fare in elite European education—the British Army did not suddenly invent these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least, according to a paper Lieutenant Colonel Paul Montanus (U.S.M.C.) wrote for the U.S. Army War College, “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t many reasons to explain why Lord Cornwallis or the other British commanders didn’t do that, other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence as a result simply of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the professions in American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’s son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew that he was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Barry Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-85 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, a publisher could reject a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised’” because Morrow’s real reason for rejecting Safire’s book wasn’t the one it gave (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have since become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than (as before) as, effectively, servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—to a lot of academic leftists that triumph would best be explained by the fact that, say, George Lucas and the head of Twentieth Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated its American subjects differently than its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans and the Boers …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, that would appear to imply—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—that Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have come to be compensated far beyond even their fellows within the same profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to use a tool first widely used by organized labor: the strike.

It’s an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of it, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold his client’s labor product until his demands were met, just as the United Auto Workers shut down General Motors’ Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie, Rebel Without a Cause, there’s a scene in which James Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without A Cause thus describes what happened when Moscow and Washington, D.C. faced each other in October 1962, the incident later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to Star Wars sequels to Lucas, or that Revlon gave when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.
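To spell out the logic of the game rather than just the anecdotes, here is a minimal, purely illustrative sketch in Python; the payoff numbers are my own assumptions (nothing here comes from Russell or Gladwell), chosen only to show why the side that credibly refuses to blink tends to get its way:

```python
# An illustrative payoff matrix for "Chicken." The numbers are arbitrary
# assumptions, chosen only to show the structure of the game.
# Each player either "swerves" (jumps out early) or "stays" (holds out).
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),     # both back down: nobody gains much
    ("swerve", "stay"):   (-1, 1),    # the one who blinks loses; the holdout wins
    ("stay", "swerve"):   (1, -1),
    ("stay", "stay"):     (-10, -10), # the cliff: a ruinous strike, or nuclear war
}

def best_response(opponent_move):
    """Given what the other side is believed to be doing, pick the better reply."""
    return max(("swerve", "stay"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# If Capital believes Talent will hold out no matter what, backing down is rational:
print(best_response("stay"))    # -> "swerve"
# If Capital believes Talent will blink, holding out pays:
print(best_response("swerve"))  # -> "stay"
```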

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question-begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet on two counts the “cultural” explanation is also lacking: in the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries. If it were a matter of “culture,” in other words, it’s hard to see how the same technique could work no matter what the underlying business was—and in the second place, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for suspicion: surely, if “Capital” really felt threatened by this so-called “talent revolution,” it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II: when Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn’t “want to be carried out in a wooden box.” Although industries like sports or publishing are probably governed by less hard-eyed people than automaking, surely they are not so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, or even the fact that they shared a common language. On the other hand, neither does it seem likely that anyone would concede after a minor threat or two. Still, I’d say that thinking about these events in Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer those terms provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry to reach. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it’s had some curious consequences: if “Talent” were something mysterious, one might expect it to come from anywhere—yet academia appears to think that it always comes from the same sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, wrote in Slate recently, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece, for instance, describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—They get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny that minor leaguers are subject to the Fair Labor Standards Act: as the author called “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, baseball doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Student Organization (GESO) went out on strike in the late 1990s) that graduate students are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric surrounding both baseball and the academy—a rhetoric that excludes considerations common to other pursuits, like gambling—is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries in at least one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” should have tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars are only very rarely productive early in their careers, in other words, so too are baseball players: as Jim Callis noted for Baseball America (as cited in the paper “Initial Public Offerings of Baseball Players” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, most academic production is done by just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) only want to pay for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters when, with the same money, you can buy just one superstar?

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interests to wildly reward the “superstars,” because that bought them the most production at the cheapest rate. Why pay a ton of guys to hit all of the home runs, you might say—when, for much less, you can buy Barry Bonds? (In 2001, all major leaguers collectively hit over 5000 home runs, for instance—but Barry Bonds hit 73 of them, more than any single player had ever hit in a season before.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Barry Bonds is so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a value to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take the total salaries paid to all the major leaguers in a given season, divide by all the home runs they hit, and thereby work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can collect 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the far smaller sums they pay to the (vastly more numerous) non-superstars.
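To make that back-of-the-envelope logic concrete, here is a minimal sketch in Python of the calculation described in the parenthesis above. The league-wide payroll figure and Bonds’s salary are placeholder assumptions, not real data; only the “over 5000” home run total and Bonds’s 73 come from the text:

```python
# A sketch of the "price per home run" comparison described above.
# The payroll and salary figures are placeholder assumptions, NOT real data;
# only the league total ("over 5000") and Bonds's 73 home runs come from the essay.

league_payroll = 2_000_000_000   # hypothetical: total salaries paid to all major leaguers that season
league_home_runs = 5_000         # the essay's "over 5000" figure for 2001
bonds_salary = 10_000_000        # hypothetical placeholder for Bonds's single-season salary
bonds_home_runs = 73             # from the essay

league_price_per_hr = league_payroll / league_home_runs
bonds_price_per_hr = bonds_salary / bonds_home_runs

print(f"League average: ${league_price_per_hr:,.0f} per home run")  # $400,000 under these assumptions
print(f"Bonds:          ${bonds_price_per_hr:,.0f} per home run")   # about $137,000 under these assumptions

# If the league average exceeds Bonds's figure, then in this narrow sense the
# "wildly overpaid" superstar is, per unit of production, actually a bargain.
```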

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only are the employers actually getting a bargain (because no matter what “Talent” got paid, they simply couldn’t be paid what they were really “worth”), but also “Talent’s” (seemingly vast, but in reality undervalued) salaries enable the system to be performed as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why it is that, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to events like this, have rapidly turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? There’s an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.