Paper Moon

Say, it’s only a paper moon
Sailing over a cardboard sea
But it wouldn’t be make-believe
If you believed in me
—“It’s Only A Paper Moon” (1933).

 

As all of us sublunaries know, we now live in a technological age where high-level training is required for anyone who prefers not to deal methamphetamine out of their trailer—or at least, that’s the story we are fed. Anyway, in my own case the urge towards higher training has manifested in a return to school; hence my absence from this blog. Yet, while even I recognize this imperative, the drive toward scientific excellence is not accepted everywhere: as longer-term readers may know, last year Michael Wilbon of ESPN wrote a screed (“Mission Impossible: African-Americans and Analytics”) that not only railed against the importation of what is known as “analytics” into sports—where he joined arms with nearly every old white guy sportswriter everywhere—but, more curiously, essentially claimed that the statistical analysis of sports is racist. “Analytics” seem, Wilbon said, “to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” But while Wilbon may be dismissed because “analytics” is obviously friendlier to black people than many other forms of thought—it seems patently clear that something that pays more attention to actual production than to whether an athlete has a “good face” (as detailed in Moneyball) is going to be, on the whole, less racist—he isn’t entirely mistaken. Even if Wilbon appears, moronically, to think that his “enemy” is just a bunch of statheads arguing about where to put your pitcher in the lineup, or whether two-point jump shots are valuable, he can be taken seriously insofar as his true opponent is none other than—Sir Isaac Newton.

Although not many realize it, Isaac Newton was not simply the model of genius familiar to us today as the maker of scientific laws and victim of falling apples. (A story he may simply have made up in order to fend off annoying idiots—a feeling with which, if you are reading this, you may be familiar.) Newton did, of course, first conjure the laws of motion that, on Boxing Day 1968, led William Anders, aboard Apollo 8, to reply “I think Isaac Newton is doing … the driving now” to a ground controller’s son who asked who was in charge of the capsule—but despite the immensity of his scientific achievements, those laws were not the driving (ahem) force of his curiosity. Newton’s main interests, as a devout Christian, lay instead in ecclesiastical history—a topic that led him to perhaps the earliest piece of “analytics” ever written: an 87,000-word monstrosity published in 1728, the year after the great physicist’s death.

Within the pages of this book is one of the earliest statistical studies ever written—or so at least Karl Pearson, called “the founder of modern statistics,” realized some two centuries later. Pearson started the world’s first statistics department in 1911, at University College London; he either inaugurated or greatly expanded some half-dozen entire scientific disciplines, from meteorology to genetics. When Albert Einstein was a young graduate student, the first book his study group took up was a work of Pearson’s. In other words, while perhaps not a genius on the order of his predecessor Newton or his successor Einstein, Pearson was prepared to recognize a mind that was. More significantly, Pearson understood that, as he later wrote in the essay that furnishes the occasion for this one, “it is unusual for a great man even in old age to write absolutely idle things”: when someone immensely intelligent does something, it may not be nonsense no matter how much it might look like it.

That’s what led Pearson, in 1928, to publish the short essay of interest here, which concerns what could appear to be the ravings of a religious madman but, as Pearson saw, were not: Newton’s 1728 The Chronology of Ancient Kingdoms Amended, to which is prefixed: A Short Chronicle from the First Memory of Things in Europe to the Conquest of Persia by Alexander the Great. As Pearson understood, it’s a work of apparent madness that conceals depths of genius. But it’s also, as Wilbon might recognize (were he informed enough to realize it), a work that is both a loaded gun pointed at African-Americans—and, perhaps, a tool of their liberation.

The purpose of the section of the Chronology that concerned Pearson—there are others—was what Pearson called “a scientific study of chronology”: that is, Newton attempted to reconstruct the reigns of various kings, from contemporary France and England to the ancient rulers of “the Egyptians, Greeks and Latins” to the kings of Israel and Babylon. By consulting ancient histories, the English physicist compiled lists of various reigns in kingdoms around the world—and what he found, Pearson tells us, is that “18 to 20 years is the general average period for a reign.” But why is this, which might appear to be utterly recondite, something valuable to know? Well, because Newton is suggesting that by using this list and average, we can compare it to any other list of kings we find—and thereby determine whether the new list is likely to be spurious or not. The greater the difference between the new list of kingly reigns and Newton’s calculations of old lists, in short, the more likely it is that the new list is simply made up, or fanciful.
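To make Newton’s procedure concrete, here is a minimal sketch of the comparison he was proposing—the reign lengths, the benchmark window, and the tolerance below are all invented for illustration, not taken from Newton or Pearson:

```python
# A toy version of Newton's test: check a candidate king list against the
# 18-to-20-year average reign he derived from verifiable dynasties.
# All numbers (and the tolerance) are invented for illustration.
def average_reign(reigns: list) -> float:
    return sum(reigns) / len(reigns)

def looks_spurious(reigns: list, low: float = 18.0, high: float = 20.0,
                   tolerance: float = 5.0) -> bool:
    """Flag a list whose average reign falls well outside the benchmark window."""
    avg = average_reign(reigns)
    return avg < low - tolerance or avg > high + tolerance

verifiable = [22, 17, 25, 12, 19, 21, 15]   # plausible, human-scale reigns
legendary = [55, 61, 48, 72, 50, 66]        # suspiciously long, mythical-looking reigns

print(average_reign(verifiable), looks_spurious(verifiable))  # ~18.7  False
print(average_reign(legendary), looks_spurious(legendary))    # ~58.7  True
```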

Newton did his study because he wanted to show that biblical history was not simply mythology, like that of the ancient Greeks: that the list of the kings of Israel exhibited all the same signs as the lists of kings we know to have really existed. Newton thereby sought to demonstrate the literal truth of the Bible. Now, that’s not something, as Pearson knew, that anyone today is likely much to care about—but what is significant about Newton’s work, as Pearson also knew, is that Newton here realized it’s possible to use numbers to demonstrate something about reality, which was not something that had ever really been done before in quite the same way. Within Newton’s seeming absurdity, in sum, there lurked a powerful idea—the very same idea Bill James and others have been able to apply to baseball and other sports over the past generation and more, with the result that, for example, the Chicago Cubs (whose front office was run by Theo Epstein, Bill James’ acolyte) last year finally won, for the first time in more than a century, the final game of the season. In other words, during that nocturnal November moonshot on Chicago’s North Side last year, Sir Isaac Newton was driving.

With that example in mind, however, it might be difficult to see just why a technique, or method of thinking, that allows a historic underdog finally to triumph over its adversaries after eons of oppression could be a threat to African-Americans, as Michael Wilbon fears. After all, like the House of Israel, neither black people nor Cubs fans are unfamiliar with the travails of wandering for generations in the wilderness—and so a method that promises, and has delivered, a sure road to Jerusalem might seem to be attractive, not a source of anxiety. Yet, while in that sense Wilbon’s plea might seem obscure, even the oddest ravings of a great man can reward study.

Wilbon is right to fear statistical science, that is, for a reason that I have been exploring recently: of all things, the Voting Rights Act of 1965. That might appear to be a reference even more obscure than the descendants of Hammurabi, but it is not: there is a statistical argument to be derived from Sections Two and Five of that act. As legal scholars know, those two sections form the legal basis of what are known as “majority minority districts”: as one scholar has described them, these are “districts where minorities comprise the majority or a sufficient percentage of a given district such that there is a greater likelihood that they can elect a candidate who may be racially or ethnically similar to them.” Since 1965 the number of such districts has steadily grown, particularly after a 1986 U.S. Supreme Court decision (Thornburg v. Gingles, 478 U.S. 30 (1986)) that the Justice Department took to mandate their use in the fight against racism. The rise of such districts is essentially why, although there were fewer than five black congressmen in the United States House of Representatives prior to 1965, there are around forty today: a share of Congress (slightly less than 10%) not much less than the share of black people in the American population (slightly more than 10%). But what appears to be a triumph for black people may not be, so statistics may tell us, a triumph for all Americans.

That’s because, according to some scholars, the rise in the numbers of black congressional representatives may also have effectively required a decline in the numbers of Democrats in the House: as one such researcher remarked a few years ago, “the growth in the number of majority-minority districts has come at the direct electoral expense of … Democrats.” That might appear, to many, to be paradoxical: aren’t most African-Americans Democrats? So how can more black reps mean fewer Democratic representatives?

The answer, however, is provided, again perhaps strangely, by the very question itself: in short, by precisely the fact that most black voters (upwards of 90%) are Democrats. Concentrating black voters into particular congressional districts, in other words, also has the effect of concentrating Democratic voters: districts that elect black congressmen and women tend to see returns that are heavily Democratic. What that means, conversely, is that these are votes that are not being cast in other districts: as Steven Hill put the point for The Atlantic in 2013, drawing up majority minority districts “had the effect of bleeding minority voters out of all the surrounding districts,” and hence worked to “pack Democratic voters into fewer districts.” In other words, majority minority districts have indeed had the effect of electing more black people to Congress—at the likely cost of electing fewer Democrats. Or, to put it another way: of electing more Republicans.
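The arithmetic of “packing” is easy to see in a toy example—three districts of 100 voters each, with made-up numbers chosen only to illustrate the mechanism:

```python
# Three hypothetical districts of 100 voters each (invented numbers).
# District A is "packed" with Democratic voters; B and C lean Republican.
districts = {
    "A": {"D": 90, "R": 10},
    "B": {"D": 40, "R": 60},
    "C": {"D": 40, "R": 60},
}

total_d = sum(d["D"] for d in districts.values())
total_r = sum(d["R"] for d in districts.values())
seats_d = sum(1 for d in districts.values() if d["D"] > d["R"])

print(f"Democratic share of all votes: {total_d / (total_d + total_r):.0%}")  # 57%
print(f"Democratic seats won: {seats_d} of {len(districts)}")                 # 1 of 3
```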

It’s certainly true that some of the foremost supporters of majority minority districts have been Republicans: for example, the Reagan-era Justice Department mentioned above. Or Benjamin L. Ginsberg, who told the New York Times that such districts were “much fairer to Republicans, blacks and Hispanics” in 1992—when he was general counsel of the Republican National Committee. But while all of that is so—and there is more to be said about majority minority districts along these lines—these are only indirectly the reasons why Michael Wilbon is right to fear statistical thought.

That’s because what Michael Wilbon ought to be afraid of about statistical science, if he isn’t already, is what happens if somebody—with all of the foregoing about majority minority districts in mind, as well as the fact that Democrats have historically been far more likely to look after the interests of working people—were to start messing around with this material the way Isaac Newton did with those lists of ancient kings. Newton, remember, compared the old lists of ancient kings with more recent, verifiable lists: by comparing the two he was able to make assertions about which lists were more or less likely to be records of real kings. Statistical science has advanced since Newton’s time, though at heart the process is the same: the comparison of two or more data sets. Today, through more sophisticated techniques—some invented by Karl Pearson—statisticians can make inferences about, for example, whether what is recorded in one data set caused what happened in another. Using such techniques, someone today could take the lists of African-American congressmen and women and begin to compare them to other sets of data. And that is the real reason Michael Wilbon should be afraid of statistical thought.

Because what happens when, let’s say, somebody takes that data about black congressmen—and compares it to, I don’t know, Thomas Piketty’s mountains of data about economic inequality? Let’s say, specifically, the share of American income captured by the top 0.01% of all wage earners? Here is a graph of African-American members of Congress since 1965:

Chart of African American Members of Congress, 1967-2012

And here is, from Piketty’s original data, the share of American income captured etc.:

Share of U.S. Income, .01% (Capital Gains Excluded) 1947-1998

You may wish to examine the middle 1980s: perhaps coincidentally, right around the time of Thornburg v. Gingles, both lines take a huge jump. Leftists, of course, may complain that this juxtaposition could lead to blaming African-Americans for the economic woes suffered by so many Americans—a result that Wilbon should, rightly, fear. But on the other hand, it could also lead Americans to realize that their political system, in which the number of seats in Congress is so limited that “majority minority districts” have, seemingly paradoxically, resulted in fewer Democrats overall, may not be much less anachronistic than the system that governed Babylon—a result that, as Michael Wilbon is apparently not anxious to tell you, might lead to something of benefit to everyone.

Either thought, however, can lead to only one conclusion: when it comes to the moonshot of American politics, maybe Isaac Newton should still—despite the protests of people like Michael Wilbon—be driving.

The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e. what didn’t happen. But even in 1998, there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty accused then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prohibit what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that grew until, beginning with the 106th Congress (1999–2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total number of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making those surrounding districts all but impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by margins of more than 50 percentage points, while Mitt Romney won only eight districts by such a large margin. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively wasting votes in others.

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive that these disparities always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished, such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his answer merely raises the larger question: why should geography have anything to do with voting at all? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position implies that minority votes will remain less valuable than other votes, because a vote in a “majority-minority” district has a lower probability of electing a congressperson who belongs to the majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because they reflect the underlying distribution of people—but what matters is whether whoever is elected can pass policies that actually benefit their constituents.

Right here, in other words, we get to the heart of the dispute between the late Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the number of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts, after all, are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes, like those in majority-minority districts, are similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. While winning individual races by huge margins might be satisfying to some, no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures, vote,” Chief Justice of the Supreme Court Earl Warren wrote in 1964; in that case, Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. Their opponents ought to reply that they are interested in what people could be—and in building a real road to get there.

The “Hero” We Deserve

“He’s the hero Gotham deserves, but not the one it needs …”
—The Dark Knight (2008).

 

The election of Donald Trump, Peter Beinart argued the other day in The Atlantic, was precisely “the kind of democratic catastrophe that the Constitution, and the Electoral College in particular, were in part designed to prevent.” It’s a fairly common sentiment, it seems, in some parts of the liberal press: Bob Cesca, of Salon, argued back in October that “the shrieking, wild-eyed, uncorked flailing that’s taking place among supporters of Donald Trump, both online and off” made an “abundantly self-evident” case for “the establishment of the Electoral College as a bulwark against destabilizing figures with the charisma to easily manipulate [sic] low-information voters.” Such arguments often assume that their opponents are dewy-eyed idealists, their eyes clouded by Frank Capra movies: Cesca, for example, calls the view in favor of direct popular voting an argument for “popular whimsy.” In reality, however, it’s the supposedly liberal argument in favor of the Electoral College that’s based on a misperception: what people like Beinart or Cesca don’t see is that the Electoral College is not a “bulwark” for preventing the election of candidates like Donald Trump—but in fact a machine for producing them. They don’t see it because they do not understand how the Electoral College is built on a flawed knowledge of probability—an argument that, perhaps horrifically, suggests that the idea that powered Trump’s campaign, the thought that the American leadership class is dangerously out of touch with reality, is more or less right.

To see just how shaky our collective grasp of that knowledge is, ask yourself this question (as Distinguished Research Scientist of the National Board of Medical Examiners Howard Wainer asked several years ago in the pages of American Scientist): which counties of the United States have the highest rates of kidney cancer? As it happens, Wainer noted, they “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, say, in view of the fact that rural areas tend to be freer of the pollution that afflicts the largest cities. But, Wainer continued, consider also that the American counties with the lowest rates of kidney cancer … “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, Wainer remarks, due to “the poverty of the rural lifestyle.” After all, people in rural counties very often don’t receive the best medical care, tend to eat worse, and tend to drink too much and use too much tobacco. But wait: one of these stories has to be wrong; they can’t both be right. Yet as Wainer goes on to write, both are true: rural American counties have both the highest and the lowest incidences of kidney cancer. But how?

To solve the seeming mystery, consider a hypothetical example taken from the Nobel Prize-winner Daniel Kahneman’s magisterial book, Thinking, Fast and Slow. “Imagine,” Kahneman says, “a large urn filled with marbles.” Half of these marbles are white, and half are red. Now imagine “two very patient marble counters” taking turns drawing from the urn: “Jack draws 4 marbles on each trial, Jill draws 7.” Every time one of them draws an unusual sample—that is, a sample of marbles that is either all-red or all-white—each records it. The question Kahneman then implicitly asks is: which marble counter will draw more all-white (or all-red) samples?

The answer is Jack—“by a factor of 8,” Kahneman notes: Jack is likely to draw a sample of only one color more than twelve percent of the time, while Jill is likely to draw such a sample less than two percent of the time. But it isn’t really necessary to know high-level mathematics to understand that because Jack is drawing fewer marbles at a time, it is more likely that he will draw all of one color or the other than Jill is. By drawing fewer marbles, Jack is simultaneously more exposed to extreme events—just as it is more likely that, as Wainer has observed, a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category,” while conversely if that same county “has one cancer death it would be among the highest.” Because there are fewer people in rural American counties than urban ones, a rural county will have a more extreme rate of kidney cancer, either high or low, than an urban one—for the very same reason that Jack is more likely to have a set of all-white or all-red marbles. The sample size is smaller—and the smaller the sample size, the more likely it is that the sample will be an outlier.
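Kahneman’s “factor of 8” is easy to verify. The snippet below assumes, as his example does, an urn that is half red and half white, and treats the draws as effectively independent because the urn is large:

```python
# Exact probability that a sample from a half-red, half-white urn is all one color.
def p_all_one_color(n_draws: int) -> float:
    return 2 * 0.5 ** n_draws   # all red, plus all white

jack, jill = p_all_one_color(4), p_all_one_color(7)
print(f"Jack (4 draws): {jack:.1%}")  # 12.5%
print(f"Jill (7 draws): {jill:.1%}")  # 1.6%
print(f"Ratio: {jack / jill:.0f}")    # 8
```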

So far, of course, I might be said to be merely repeating something everyone already knows—maybe you anticipated the point about Jack and Jill and the rural counties, or maybe you just don’t see how any of this has any bearing beyond the lesson that scientists ought to be careful when they are designing their experiments. As many Americans do these days, perhaps you think that science is one thing and politics is something else—maybe because Americans have been taught for several generations now, by people as diverse as conservative philosopher Leo Strauss and liberal biologist Stephen Jay Gould, that the humanities are one thing and the sciences are another. (Which Geoffrey Harpham, formerly the director of the National Humanities Center, might not find surprising: Harpham has claimed that “the modern concept of the humanities”—that is, as something distinct from the sciences—“is truly native only to the United States.”) But consider another of Wainer’s examples: one drawn from, as it happens, the world of education.

“In the late 1990s,” Wainer writes, “the Bill and Melinda Gates Foundation began supporting small schools on a broad-ranging, intensive, national basis.” Other foundations supporting the movement for smaller schools included, Wainer reported, the Annenberg Foundation, the Carnegie Corporation, George Soros’s Open Society Institute, and the Pew Charitable Trusts, as well as the U.S. Department of Education’s Smaller Learning Communities Program. These programs brought pressure—to the tune of 1.7 billion dollars—on many American school systems to break up their larger schools (a pressure that, incidentally, succeeded in cities like Los Angeles, New York, Chicago, and Seattle, among others). The reason the Gates Foundation and its helpers cited for pressuring America’s educators was that, as Wainer writes, surveys showed that “among high-performing schools, there is an unrepresentatively large proportion of smaller schools.” That is, when researchers looked at American schools, they found that the highest-achieving schools included a disproportionate number of small ones.

By now, you see where this is going. What all of these educational specialists didn’t consider—but Wainer’s subsequent research found, at least in Pennsylvania—was that small schools were also disproportionately represented among the lowest-achieving schools. The Gates Foundation (led, mind you, by Bill Gates) had simply failed to consider that small schools might of course be overrepresented among the best schools, because schools with smaller numbers of students are more likely to be extreme cases. (Something that, by the way, also may have consequences for that perennial goal of professional educators: the smaller class size.) Small schools tend to be represented at the extremes not for any particular reason, but just because that’s how math works.
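The point is easy to reproduce with a simulation in which no school is genuinely better than any other, so that every difference in a school’s average score comes from sample size alone; the enrollments and scores below are invented for illustration:

```python
# Toy simulation of Wainer's point: when schools differ only in size,
# the smallest ones dominate both the top and the bottom of the rankings.
import random
random.seed(0)

def school_average(n_students: int) -> float:
    # Every student is drawn from the same distribution (mean 500, sd 100),
    # so no school is "really" better than any other.
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

schools = [("small", school_average(50)) for _ in range(500)] + \
          [("large", school_average(2000)) for _ in range(500)]

ranked = sorted(schools, key=lambda s: s[1], reverse=True)
top50, bottom50 = ranked[:50], ranked[-50:]
print("small schools in top 50:   ", sum(kind == "small" for kind, _ in top50))
print("small schools in bottom 50:", sum(kind == "small" for kind, _ in bottom50))
# On virtually any run, both counts come out at or near 50 of 50.
```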

The inherent humor of a group of educators (and Bill Gates) not understanding how to do basic mathematics is, admittedly, self-evident—and incidentally good reason not to take the testimony of “experts” at face value. But more significantly, it also demonstrates the very real problem here: if highly-educated people (along with college dropout Gates) cannot see the flaws in their own reasoning while discussing precisely the question of education, how much more vulnerable is everyone else to flaws in their thinking? To people like Bob Cesca or Peter Beinart (or David Frum; cf. “Noble Lie”), of course, the answer to this problem is to install more professionals, more experts, to protect us from our own ignorance: to erect, as Cesca urges, a “firewall[…] against ignorant populism.” (A wording that, one imagines, reflects Cesca’s mighty struggle to avoid the word “peasants.”) The difficulty with such reasoning, however, is that it ignores the fact that the Electoral College embodies the same sort of error as the one that bedeviled the Gates Foundation—and that you may have encountered in yourself when you considered the kidney cancer example above.

Just as rural American counties, that is, are more likely to have either lots of cases—or very few cases—of kidney cancer, so too are sparsely-populated states more likely to vote in an extreme fashion inconsistent with the rest of the country. For one, it’s a lot cheaper to convince the voters of Wyoming (the half a million or so of whom possess not only a congressman, but also two senators) than the voters of, say, Staten Island (who, despite being only slightly fewer in number than the inhabitants of Wyoming, have to share a single congressman with part of Brooklyn). Yet the existence of the Electoral College, according to Peter Beinart, demonstrates just how “prescient” the authors of the Constitution were: while Beinart says he “could never have imagined President Donald Trump,” he’s glad that the college is cleverly constructed so as to … well, so far as I can tell Beinart appears to be insinuating that the Electoral College somehow prevented Trump’s election—so, yeeaaaah. Anyway, for those of us still living in reality, suffice it to say that the kidney cancer example illustrates just how dividing one big election into fifty smaller ones inherently makes it more probable that some of those subsidiary elections will be outliers. Not for any particular reason, mind you, but simply because that’s how math works—as anyone not named Bill Gates seems intelligent enough to understand once it’s explained.

In any case, the Electoral College thus does not make it less likely that an outlier candidate like Donald Trump is elected—but instead more likely. What Beinart and other cheerleaders for the Electoral College fail to understand (whether through ignorance or some other motive) is that the Electoral College is not a “bulwark” or “firewall” against the Donald Trumps of the world. In reality—a place that, Trump has often implied, those in power seem not to inhabit any more—the Electoral College did not prevent Donald Trump from becoming the president of the United States, but was instead (just as everyone witnessed on Election Day) exactly the means by which the “short-fingered vulgarian” became the nation’s leader. Contrary to Beinart or Cesca, the Electoral College is not a “firewall” or some cybersecurity app—it is, instead, a roulette wheel, and a biased one at that.

Just as a sucker can expect that, so long as she stays at the roulette wheel, she will eventually go bust, so too can the United States expect, so long as the Electoral College exists, to get presidents like Donald Trump: “accidental” presidencies, after all, have been an occasional feature of presidential elections since at least 1824, when John Quincy Adams was elected despite the fact that Andrew Jackson had won the popular vote. If not even the watchdogs of the American leadership class—much less that class itself—can see the mathematical point of the argument against the Electoral College, that in and of itself is pretty good reason to think that, while the specifics of Donald Trump’s criticisms of the Establishment during the campaign might have been ridiculous, he wasn’t wrong to criticize it. Donald Trump, then, may not be the president-elect America needs—but he might just be the president people like Peter Beinart and Bob Cesca deserve.

 

This Doubtful Strife

Let me be umpire in this doubtful strife.
—Henry VI, Part 1. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another step on the path to computerized refereeing—a march that, she says, reached another milestone just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable whether the humanities’ opposition to the sciences hasn’t had pernicious rather than beneficial effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But anyone who takes a real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it so that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” it’s a point that cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been regarded as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” and the like have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also of the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, on this view, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend a smaller share of what they take in than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, by dividing the money more equally, more economic activity is generated—and hence the more equal society is also the more prosperous society.
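Stiglitz’s arithmetic is simple enough to check directly. The sketch below uses invented spending rates (the fraction of income each household actually spends), since Stiglitz gives only the incomes; the point is the mechanism, not the particular percentages:

```python
# Stiglitz's example: $21.7 million as one income versus 500 incomes of $43,400.
# The spending rates are assumptions for illustration, not Stiglitz's figures.
income_total = 21_700_000
rich_spending_rate = 0.10     # a very wealthy household spends a small share of income
typical_spending_rate = 0.95  # a $43,400 household spends nearly all of it

spent_by_one_rich_household = income_total * rich_spending_rate
spent_by_500_households = 500 * (income_total / 500) * typical_spending_rate

print(income_total / 500)            # 43400.0 -- the salary Stiglitz cites
print(spent_by_one_rich_household)   # ~$2.2 million flows back into the economy
print(spent_by_500_households)       # ~$20.6 million flows back into the economy
```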

Still, to grasp Stiglitz’s point requires following a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on rather a different cast than it is usually given. By training their students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st century America. Even worse, such an education fails to give students a means of combatting that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
—The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of sports at all, it is either of something extremely obscure and obscenely technical that does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has just such a picture in mind—probably you know at least a few such people, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents), the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as a purely amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until he was finally hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, who did much to construct modern geology. In the Principles of Geology (which largely founded the field) Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things past are likely to be the same as those already at work around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify the player’s own confidence in his ability.) Sabermetricians disregard such “methods” of analysis in favor of examination of the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, inevitable: sabermetrics, after all, is a discipline well-suited for the purpose of discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable then that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective certain key features of the American system will nearly instantly jump out.

The first of those features is that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then that suggests that that team is either tremendously unlucky or there is some hidden factor preventing success. Maybe, for instance, that team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters the team has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
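For readers who want to see the comparison James proposed, here is a minimal sketch; the exponent of 2 is the classic choice (sabermetricians now often prefer values closer to 1.83), and the team records are invented for illustration:

```python
# Bill James' Pythagorean expectation: a team's "expected" winning percentage
# computed from runs scored (RS) and runs allowed (RA).
def pythagorean_win_pct(rs: float, ra: float, exponent: float = 2.0) -> float:
    return rs ** exponent / (rs ** exponent + ra ** exponent)

# name: (runs scored, runs allowed, wins, losses) -- made-up records
teams = {
    "Team A": (800, 650, 95, 67),
    "Team B": (800, 650, 81, 81),  # same run ratio, far fewer wins: something fluky
}

for name, (rs, ra, w, l) in teams.items():
    expected, actual = pythagorean_win_pct(rs, ra), w / (w + l)
    print(f"{name}: expected {expected:.3f}, actual {actual:.3f}, gap {actual - expected:+.3f}")
```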

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, that would set off alarms for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it were to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” and the like in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or, to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.

Some will defend that disconnect, as Justice Harlan did over fifty years ago, on grounds familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet, attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as The Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see any American revolutionaries around you right now, he or she is likely wearing, not a poncho or a black turtleneck, but an Oakland A’s hat.

At Play In The Fields Of The Lord

Logo for 2015 US Amateur at Olympia Fields Country Club

Behold, I send you forth as sheep in the midst of wolves:
be ye therefore wise as serpents, and harmless as doves.
—Matthew 10:16

Now that the professional, Open tournaments are out of the way, the U.S. Amateur approaches. A tournament that has always been a symbol of wealth and discrimination—the Amateur was invented specifically to keep out the riff-raff of professional golfers—its site this year might be considered particularly unfortunate, since the tournament falls just over a year after the Michael Brown shooting in Ferguson, Missouri: Olympia Fields, in Chicago’s south suburbs, is a relatively wealthy enclave amid a swath of exceedingly poor villages and towns very like the terrain of the St. Louis suburbs a few hundred miles away. Yet there’s a deeper irony at work here that might be missed even by those who’d like to point out that similarity of setting: the format of the tournament, match play, highlights precisely what the real message of the Brown shooting was. That real message, the one that is actually dangerous to power, wasn’t the one shouted by protestors—that American police departments are “racist.” The really dangerous message is the one echoed by the Amateur: a message that, read properly, tells us that our government’s structure is broken.

The later rounds of the U.S. Amateur are played under golf’s match play rules rather than its stroke play rules—a difference that will seem arcane to those unfamiliar with the sport, but a very significant one nevertheless. In stroke play, competitors play whatever number of holes are required—in professional tournaments, usually 72—and count up however many strokes each took: the player with the fewest strokes wins. Match play is not the same: in stroke play, each golfer is effectively playing against every other player in the field, because every player’s strokes count against the whole field. That is not so in match play.

Match play consists of, as the name suggests, matches: once the field is cut to the 64 players with the lowest scores after an initial two-day stroke play tournament, each of those 64 contestants plays an 18-hole match against one other contestant. The winner of each match moves on, until there is a champion—a single-elimination bracket exactly like the NCAA basketball tournament held every March. The winner of each match, as John Van der Borght says on the website of the United States Golf Association, “is the player who wins the most holes.” That is, what matters on every hole is only whether the golfer has shot a lower score than the opponent on that hole, not overall. Each hole starts the competition again, in other words—like flipping coins, what happened in the past is irrelevant. It’s a format that might sound hopeful, because on each hole whatever screw-ups a player commits are consigned to the dustbin of history. In fact, however, it’s just this element that makes match play the least egalitarian of formats—and ties it to Ferguson.

Tournaments conducted under match play rules are always subject to a kind of mathematical oddity called a Simpson’s Paradox: such a paradox occurs when, as the definition on Wikipedia says, it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” For example, as I have mentioned on this blog before, in the first round of the PGA Tour’s 2014 Accenture Match Play tournament in Tucson, a relative unknown named Pablo Larrazabal shot a 68 to Hall-of-Famer Ernie Els’ 75—but because they played different opponents, Larrazabal was out of the tournament and Els was in. Admittedly, even with such an illustration the idea might still sound opaque, but the meaning can be seen by considering, for example, the tennis player Roger Federer’s record against his rival Rafael Nadal.

Roger Federer has won 17 major championships in men’s tennis, a record—and yet many people argue that he is not the Greatest Of All Time (G.O.A.T.). The reason they can argue that is, as Michael Steinberger pointed out in the New York Times not long ago, that Federer “has a losing record against Nadal, and a lopsided one at that.” Steinberger then proceeded to argue why that record should be discarded and Federer should be called the “GOAT” anyway. But weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—what an anonymous blogger did in 2009: a feat that demonstrates just what a Simpson’s Paradox is, and how it might apply both to the U.S. Amateur and to Ferguson, Missouri.

What that blogger did, on a blog entitled SW19—a reference to the United Kingdom’s postal code for the London district that is home to Wimbledon—was simple: he counted up the points.

Let me repeat: he counted up the points.

That might sound trivial, of course, but as the writer of the SW19 blog realized, tennis is a game that abounds in Simpson’s Paradoxes: that is, it is a game in which it is possible to score fewer points than your opponent and still win the match. Many people don’t realize this: it might be expected, for example, that because Nadal has an overwhelmingly dominant win-loss record versus Federer, he must also have won an equally dominant number of points from the Swiss champion. But an examination of the points scored in each of the matches between Federer and Nadal demonstrates that in fact the difference between them was minuscule.

The SW19 blogger wrote his post in 2009; at that time Nadal led Federer by 13 matches to 7, a 65 percent winning edge for the Spaniard. One of those 20 matches was the 2008 French Open—played on Nadal’s best surface, clay—which Nadal won in straight sets, 6-1, 6-3, 6-0. Across those 20 matches the two men played 4,394 total points, a “point” being the unit of play that begins with a serve and ends when one player fails to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like golf’s stroke play format, in which every “point” (stroke) is simply added to the total and the winner has the fewest—then it might be expected that Nadal had won about 65 percent of those 4,394 points, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, ought to stand at about 2,856 to 1,538.

Yet this, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal 2,221, Federer 2,173. In other words, even including the epic beating at Roland Garros in 2008, Nadal had beaten Federer by a total of only 48 points over the course of their careers, about one percent of all the points scored. Not only that, but if that single match at the 2008 French Open is excluded, the margin becomes eight points. The mathematical difference between Nadal and Federer, then, is the difference between a couple of motes of dust on the edge of a coin while it’s being flipped—if what is measured is the act that is the basis of the sport, the act of scoring points. In terms of points scored, Nadal’s edge is about half a percentage point—and most of that edge was generated by a single match. Yet Nadal had a 65 percent edge in their matches.
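
For readers who want to check the arithmetic, here is a short Python sketch using only the totals quoted above; the margin works out to roughly one percent of the points played, set against a 65 percent share of the matches.

```python
# Re-running the SW19 blogger's arithmetic from the totals quoted above.
nadal_points, federer_points = 2221, 2173
total_points = nadal_points + federer_points      # 4,394 points over 20 matches

margin = nadal_points - federer_points
print(margin)                                     # 48 points
print(round(margin / total_points, 3))            # 0.011 -- about one percent
print(round(nadal_points / total_points, 3))      # 0.505 -- Nadal's share of points
print(13 / 20)                                    # 0.65  -- Nadal's share of matches
```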

How did that happen? The answer is that the structure of tennis scoring is similar to that of match play in golf: the relation between wins and points isn’t direct. In fact, as the SW19 blogger shows, of the twenty matches Nadal and Federer had played to that moment in 2009, Federer had actually scored more points than Nadal in three of them—and still lost each of those matches. If there were a direct relation between points and wins in tennis, that is, the record between Federer and Nadal would stand even, at 10-10, instead of what it was in reality, 13-7—a record that would have accurately captured the real point differential between them. But because what matters in tennis isn’t—exactly—the total number of points you score, but instead the number of games and sets you win, it is entirely possible to score more points than your opponent in a tennis match—and still lose. (Or, the converse.)

The reason why that is possible, as Florida State University professor Ryan Rodenberg put it in The Atlantic not long ago, is due to “tennis’ decidedly unique scoring system.” (Actually, not unique, because as might be obvious by now match play golf is scored similarly.) In sports like soccer, baseball, or stroke play golf, as sports psychologist Allen Fox once wrote in Tennis magazine, “score is cumulative throughout the contest … and whoever has the most [or, in the case of stroke play golf, least] points at the end wins.” But in tennis things are different: “[i]f you reach game point and win it, you get the entire game while your opponent gets nothing—all the points he or she won in the game are eliminated.” Just in the same way that what matters in tennis is the game, not the point, in match play golf all that matters is the hole, and not the stroke.

Such scoring systems breed Simpson’s Paradoxes: that is, results that don’t reflect the underlying value a scoring system is meant to reflect—we want our games to be won by the better player, not the lucky one—but instead are merely artifacts of the system used to measure. The point (ha!) can be shown by way of an example taken from a blog written by one David Smith, head of marketing for a company called Revolution Analytics, about U.S. median wages. In that 2013 post, Smith reported that the “median US wage has risen about 1%, adjusted for inflation,” since 2000. But was that statistic important—that is, did it measure real value?

Well, what Smith found was that wages for high school dropouts, high school graduates, high school graduates with some college, college graduates, and people with advanced degrees all fell over the same period. Or, as Smith says, “within every educational subgroup, the median wage is now lower than it was in 2000.” But how can it be that “overall wages have risen, but wages within every subgroup have fallen?” The answer is similar to the reason Nadal had a 65 percent winning record against Federer: there are proportionally more college graduates in the workforce now than in 2000, and the wages of college graduates haven’t fallen as far (1.2 percent) as those of, say, high school dropouts (7.9 percent). So despite the fact that everyone is poorer—everyone is receiving lower wages, adjusted for inflation—than in 2000, mendacious people can say wages are actually up. Wages are up—if you “compartmentalize” the numbers in just the way that tells the story you’d like to tell.
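
Here is a toy version of that mechanism, with invented numbers (Smith’s post works with medians, but the composition effect is the same): every group’s wage falls, yet the overall average rises, because the mix of workers shifts toward the better-paid group.

```python
# A toy illustration of the wage paradox, with made-up numbers: every
# group's wage falls, yet the overall average rises, because the mix of
# workers shifts toward the higher-paid group.

def overall_wage(groups):
    """Weighted average wage across (share_of_workforce, wage) pairs."""
    return sum(share * wage for share, wage in groups)

year_2000 = [(0.6, 30_000), (0.4, 60_000)]   # (share, wage): dropouts, graduates
year_2013 = [(0.3, 28_000), (0.7, 58_000)]   # both wages lower, more graduates

print(overall_wage(year_2000))   # 42000.0
print(overall_wage(year_2013))   # 49000.0 -- "wages are up," though every group is down
```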

Now, while the story about American wages might suggest a connection to Ferguson—and it does—that isn’t the connection between the U.S. Amateur and Ferguson, Missouri, that I’d like to discuss. The connection is this: the trouble with the U.S. Amateur is that it is conducted under match play, a format that permits Simpson’s Paradox results; and Simpson’s Paradoxes are, at heart, boundary disputes—arguments about whether to divide the raw data into many small piles or present them as one big pile. That is the real link to Ferguson, because the real issue behind Darren Wilson’s shooting of Michael Brown isn’t racism—or at least, the way to solve it isn’t to talk about racism. It’s to talk about borders.

After Ferguson police officer Darren Wilson shot Michael Brown last August, the Department of Justice issued a report that was meant, as Zoë Carpenter of The Nation wrote this past March, to “address the roots of the police force’s discriminatory practices.” That report held that those practices were not “simply the result of racist cops,” but instead stemmed “from the way the city preys on residents financially, relying on the fines that accompany even minor offenses to balance its budget.” The report found an email from Ferguson’s finance director to the town’s police chief that, Carpenter reported, said “unless ticket writing ramps up significantly before the end of the year, it will be hard to significantly raise collections next year.” The finance director’s concerns were justified: only slightly less than a quarter of Ferguson’s total budget was generated by traffic tickets and other citations. The continuing operation of the town depends on revenue raised by the police—a need, in turn, that drives the kind of police zealotry that the Department of Justice said contributed to Brown’s death.

All of which might seem quite far from the concerns of the golf fans watching the results of the matches at the U.S. Amateur. Yet consider a town not far from Ferguson: Beverly Hills, Missouri. Like Ferguson, Beverly Hills is located to the northwest of downtown St. Louis, and like Ferguson it is a majority black town. But where Ferguson has over 20,000 residents, Beverly Hills has only around 600 residents—and that size difference is enough to make the connection to the U.S. Amateur’s format of play, match play, crystalline.

Ferguson, after all, is not alone in depending so heavily on police actions for its revenue: Calverton Park, for instance, is another Missouri “municipality that last fiscal year raised a quarter of its revenue from traffic fines,” according to the St. Louis Post-Dispatch. Yet while Ferguson, like Calverton Park, raised about a quarter of its budget from police actions, Beverly Hills raised something like half of its municipal budget from traffic and other kinds of citations, as a story in the Washington Post reported. All these little towns depend on traffic tickets to meet their budgets: “Most of the roughly ninety municipalities in St. Louis County,” Carpenter reports in The Nation, “have their own courts, which … function much like Ferguson’s: for the purpose of balancing budgets.” Without even getting into the fairness of property taxes or sales taxes as a basis for municipal budgeting, it seems obvious that depending on traffic tickets as a major source of revenue is poor planning at best. Yet without the revenue provided by cops writing tickets—and, as a result of Ferguson, the state of Missouri is considering limiting the percentage of a town’s budget that can be raised by such tickets, as the Post-Dispatch article notes—many of these towns will simply fail. And that is the connection to the U.S. Amateur.

What these towns are having to consider, in other words, is an option mentioned, according to the St. Louis Post-Dispatch, by St. Louis County Executive Steve Stenger last December: during an interview, he said that “the consolidation of North County municipalities is what we should be talking about” in response to the threat of cutting back reliance on tickets. Small towns like Beverly Hills may simply be too small: they generate too little revenue to support themselves without a huge effort on the part of the police force to find—and thus, in a sense, create—what are essentially taxable crimes. The way to solve the problem of a “racist” police department, in other words, might not be to conduct workshops or seminars to “retrain” the officers on the front line, but instead to redraw the political boundaries of the greater St. Louis metropolitan area.

That, at least, is a solution our great-grandparents considered, as an article by the writer Kim-Mai Cutler for TechCrunch this past April observed. Examining the historical roots of the housing crisis in San Francisco, Cutler found that in “1912, a Greater San Francisco movement emerged and the city tried to annex Oakland,” a move Oakland resisted. Yet as a consequence of not creating a Bay-wide government, Cutler says, “the Bay Area’s housing, transit infrastructure and tax system has been haunted by the region’s fragmented governance” ever since: the BART (Bay Area Rapid Transit) system, for example, as originally designed “would have run around the entire Bay Area,” Cutler says, “but San Mateo County dropped out in 1961 and then Marin did too.” Many of the problems of that part of Northern California, Cutler suggests via this and other examples—contra the received wisdom of our day—could be solved by bigger, not smaller, government.

“Bigger,” that is, in the sense of “more consolidated”: by the metric of sheer numbers, a government built to a larger scale might not employ as many people as do the scattered suburban governments of America today. But what such a government would do is capture all of the efficiencies of economies of scale available to a larger entity—thus, it might be in a sense smaller than the units it replaced, but it would definitely be more powerful. What Missourians and Californians—and possibly others—may be realizing, then, is that the divisions between their towns are like the divisions tennis makes around its points, or match play golf makes around its strokes: dividing a finite resource, whether points or strokes or tax dollars (or votes), into smaller pools creates what might be called “unnatural,” or “artificial,” results—that is, results that inadequately reflect the real value of the underlying resource. Just as match play can make Ernie Els’ 75 look better than Pablo Larrazabal’s 68, or tennis’ scoring system can make Rafael Nadal look much better than Federer—when in reality the difference between them is (or was) no more than a sliver of a gnat’s eyelash—dozens of little towns dissipate the real value, economic and otherwise, of the people who inhabit a region.

That’s why when Eric Holder, Attorney General for the United States, said that “the underlying culture” of the police department and court system of Ferguson needs to be reformed, he got it exactly wrong. The problems in St. Louis and San Francisco, the evidence suggests, are created not because government is getting in the way, but because government isn’t structured correctly to channel the real value of the people: scoring systems that leave participants subject to the vagaries of Simpson’s Paradox results might be perfectly fine for games like tennis or golf—where the downsides are minimal—but they shouldn’t be how real life gets scored, and especially not in government. Contra Holder, the problem is not that the members of the Ferguson police department are racists. The problem is that the government structure requires them, like occupying soldiers or cowboys, to view their fellow citizens as a kind of herd. Or, to put the matter in a pithier way: A system that depends on the harvesting of sheep will turn its agents into wolves. Instead of drowning the effects of racism—as a big enough government would through its very size—multiplying struggling towns only encourages racism: instead of diffusing racism, a system broken into little towns focuses it. The real problem of Ferguson then—the real problem of America—is not that Americans are systematically discriminatory: it’s that the systems used by Americans aren’t keeping the score right.

Mr. Tatum’s Razor

Arise, awake, and learn by approaching the exalted ones, for that path is sharp as a razor’s edge, impassable, and hard to go by, say the wise.
—Katha Upanishad 1-III-14

Plurality is never to be posited without necessity.
—William of Ockham. Questions on the Sentences of Peter Lombard. (1318).

“The United States had lost. And won.” So wrote the former European and present naturalized American John Cassidy recently, when Team USA advanced out of the “group stage” of the World Cup soccer tournament despite losing its last game of that stage. (To Germany, 1-0.) Even though they got beat, it was the first time the U.S. had advanced out of the group stage in back-to-back Cups. But while the moment represented a breakthrough for the team, Cassidy warns it hasn’t been accompanied by a breakthrough in the fandom: “don’t ask [Americans] to explain how goal difference works,” he advises. He’s right that most are unfamiliar with the rule that allowed the Americans to play on, but he’s wrong if he’s implying that Americans aren’t capable of understanding it: the “sabermetric revolution”—the statistical study of the National Pastime—begins by recognizing the same principle that backs goal difference. Yet while there is thus precedent to think that Americans could understand goal difference—and, maybe, accept soccer as a big-time sport—there is one reason to think America can’t: the American political system. And, though that might sound wacky enough for any one piece of writing, golf—a sport equally at home in America and Europe—is ideally suited to explain why.

Goal difference is a procedure that applies at the opening stage of the World Cup, which is organized differently than other large sporting tournaments. The NCAA college basketball tournament, for instance, is an “elimination” tournament: it sorts its 64 teams into four brackets, then seeds each bracket from a #1 ranked team down to a #16 ranked team. Each team then plays the team at the opposite end of its bracket’s seeding, so that the best team plays the lowest-ranked team, and so on. Winning allows a team to continue; losing sends that team home, which is what makes it an “elimination” tournament.

The World Cup also breaks its entrants into smaller groups, and for the same reason—so that the best teams don’t play each other too early—but that’s where the similarities end. The beginning, “group” stage of the tournament is conducted in a round-robin format: each team in a group plays every other team in a group. Two teams from each group then continue to the next part of the competition.

Because the group stage is played under a round-robin rather than an elimination structure, losing a game doesn’t necessarily mean exiting the tournament—which is not only why the United States was not eliminated from competition by losing to Germany, but also what makes the World Cup un-American in Cassidy’s estimation. “Isn’t cheering a team of losers,” Cassidy writes, “an un-American activity?” But there are at least two questionable ideas packed into that sentence: first, that a team that has lost—a “loser”—is devoid of athletic ability, or what we might call value; and second, that “losers” are un-American, or at least that cheering for them is.

The round-robin format of the group stage, after all, just means that the tournament does not treat the loss of a single game as revealing anything definitive about the value of a team: only a team’s record against all the other teams in its group does that. If the tournament is still unsure about the value of a team—that is, if two or more teams are tied for the best, or second-best, record (two teams advance)—then it looks at other ways of determining value. That’s what “goal difference,” or differential, is: as Ken Boehlke put it on the CBS Sports website (“Understanding FIFA World Cup Procedures”), goal difference is “found by simply subtracting a team’s goals against from its goals scored.” What that means is that, by the way the World Cup reckons things, it matters not only whether a team lost a close game but also whether that team won a blowout.

Goal difference was, as Cassidy says, the reason the American team was able to be one of the two teams in its group to advance. It’s true that the Americans were tied by win-loss record with another team in their group, Portugal. But the Americans only lost to Germany by one goal, while earlier in the stage the Portuguese had lost a game 4-0. That, combined with some other results, meant that the United States advanced and Portugal did not. What the World Cup understands is that just winning games isn’t necessarily a marker of a team’s quality, or value: what also matters is how many goals a team scores, and how many it allows.
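
A minimal sketch of that tiebreak, with hypothetical figures that mirror the situation just described: two teams level on points, one of which lost its only defeat narrowly while the other was blown out.

```python
# Teams level on points are separated by goal difference: goals scored
# minus goals conceded. The figures below are hypothetical.

def goal_difference(goals_for, goals_against):
    return goals_for - goals_against

group = {
    # name: (points, goals_for, goals_against)
    "Team A": (4, 4, 4),   # its one defeat was a narrow 0-1 loss
    "Team B": (4, 4, 7),   # its one defeat was a 0-4 blowout
}

standings = sorted(
    group.items(),
    key=lambda item: (item[1][0], goal_difference(item[1][1], item[1][2])),
    reverse=True,
)
print(standings[0][0], "advances on goal difference")   # Team A
```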

Now, John Cassidy appears to think that this concept is entirely foreign to Americans, and maybe he’s right—except for any Americans who happen to have seen the movie Moneyball, which not only grossed over $75 million in the United States and was nominated for six Oscars but also starred Brad Pitt. “What are you really worth?” was the film’s tagline, and in the speech that is the centerpiece of the movie, the character Peter Brand (played by Jonah Hill, another fairly well-known actor) says to his boss—Oakland A’s general manager Billy Beane (played by Pitt)—that “Your goal … should be to buy wins. And in order to buy wins, you need to buy runs.” And while Moneyball, the film, was released just a few years ago, the ideas that fuel it have been around since the 1970s.

To be sure, it’s hardly news that scoring runs leads to winning games—the key insight is that, as Graham MacAree put it on the website FanGraphs, it is “relatively easy to predict a team’s win-loss record using a simple formula,” a formula invented by a man named Bill James in the 1970s. The formula so resembled the classic Pythagorean Theorem that James called it the Pythagorean Expectation: what it expressed was that the ratio of a team’s past runs scored to runs allowed is a better predictor of future success (i.e., future wins and losses) than that team’s past ratio of wins to losses. What it meant was that, to quote MacAree again, “pure pythagorean expectancy is probably a better way of gauging a team than actual wins and losses.” Or to put it another way, knowing how many runs a team scored versus how many its opponents scored is more valuable than knowing how many games it won.
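
For reference, the classic form of James’ formula sets a team’s expected winning percentage equal to its runs scored squared, divided by the sum of runs scored squared and runs allowed squared; later refinements mostly adjust the exponent. A minimal sketch, with a hypothetical team:

```python
# Bill James' Pythagorean expectation in its classic form:
# expected winning pct = RS^2 / (RS^2 + RA^2),
# where RS is runs scored and RA is runs allowed.

def pythagorean_expectation(runs_scored, runs_allowed, exponent=2):
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Hypothetical team: outscores its opponents 750 to 650 over a season.
expected_pct = pythagorean_expectation(750, 650)
print(round(expected_pct, 3))         # ~0.571
print(round(expected_pct * 162))      # ~93 expected wins over a 162-game season
```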

What the Pythagorean Expectation model and the goal difference model do, then, is concentrate attention on the foundational act of their respective sports: scoring runs and scoring goals. Conversely, both weaken the focus on winning and losing. That might appear odd: isn’t the point of playing a game to win, not (just) to score? But what both methods recognize is that a focus on winning and losing, instead of on scoring, is vulnerable to a particular statistical illusion called a Simpson’s Paradox.

As it happens, an episode of the television series Numb3rs used a comparison of the batting averages of Derek Jeter and David Justice in the mid-1990s to introduce the idea of a Simpson’s Paradox, a comparison that seems tailor-made for the purpose. Here is a table—a more accurate one than the television show used—showing those averages (hits per at-bat) during the 1995, 1996, and 1997 seasons:

                 1995             1996             1997             Totals
Derek Jeter      12/48 (.250)     183/582 (.314)   190/654 (.291)   385/1284 (.300)
David Justice    104/411 (.253)   45/140 (.321)    163/495 (.329)   312/1046 (.298)

Compare the year-by-year averages: Jeter, you will find, has a worse average than Justice in every year. Then compare the two players’ totals: Jeter actually has a slightly better average than Justice. A Simpson’s Paradox results when, as the Stanford Encyclopedia of Philosophy puts it, the “structures that underlie” a set of facts “invalidate … arguments that many people, at least initially, take to be intuitively valid.” Or, as the definition on Wikipedia describes it a bit more elegantly, the paradox occurs when it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” In this case, if we consider the data year by year, it seems that Justice is the better hitter—but when we consolidate all of the data, it supports the notion that Jeter is better than Justice.
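
For anyone who wants to verify the paradox directly, here is a quick check of the table’s numbers:

```python
# Verifying the Jeter/Justice table: Justice has the higher average in each
# individual season, while Jeter has the higher average overall.

jeter   = {1995: (12, 48),   1996: (183, 582), 1997: (190, 654)}   # (hits, at-bats)
justice = {1995: (104, 411), 1996: (45, 140),  1997: (163, 495)}

def average(hits, at_bats):
    return hits / at_bats

for year in (1995, 1996, 1997):
    print(year,
          round(average(*jeter[year]), 3),      # Jeter:   .250, .314, .291
          round(average(*justice[year]), 3))    # Justice: .253, .321, .329

jeter_overall   = average(sum(h for h, _ in jeter.values()),
                          sum(ab for _, ab in jeter.values()))
justice_overall = average(sum(h for h, _ in justice.values()),
                          sum(ab for _, ab in justice.values()))
print("totals:", round(jeter_overall, 3), round(justice_overall, 3))   # 0.3 vs 0.298
```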

There are at least two reasons to think the latter hypothesis is the more likely. One is the simple fact that 1995 was Derek Jeter’s first appearance in the major leagues: he was born in 1974, whereas Justice was already a veteran player, born eight years earlier. Jeter is younger. Quite obviously, then, from the perspective of a general manager looking at these numbers after the 1997 season, buying Jeter is the better move because more of Jeter’s career is available to be bought: since Jeter is only retiring this year (2014), there were 17 more seasons of Derek Jeter available in 1997, whereas since David Justice retired in 2002, there were only 5 more seasons of David Justice available. Of course, none of that information would have been available in 1997—and injuries are always possible—but given the age difference it would have been safe to say that, assuming you valued each player roughly equally on the field, Jeter was still more valuable. In one sense, though, that exercise isn’t very helpful, because it doesn’t address just what Simpson’s Paradox has to do with thinking about Derek Jeter.

In another sense, though, it has everything to do with it. The only question that matters about a baseball player, says Bill James, is “If you were trying to win a pennant, how badly would you want this guy?” Or in other words: don’t be hypnotized by statistics. It sounds like a simple enough lesson, which in a way it is—but it’s terribly difficult to put into practice. In this case, it is easy to become mystified by the two players’ batting averages, but what James might advise is to look at the events those numbers represent: instead of looking at the averages, look at their components.

What looking at the raw numbers reveals is that Jeter had more hits than Justice over the three seasons: 385 to 312. That difference matters because—unlike the difference in batting average over the same period, which is only a couple of points—73 more hits is a lot more hits, and as James wrote in The New Bill James Historical Baseball Abstract, the “essential measure of a hitter’s success is how many runs he has created.” Further, without getting too far into the math of it, smart people who’ve studied baseball have found that a single hit is worth nearly half a run. (Joe Posnanski, former Senior Writer at Sports Illustrated and one of those people, has a nice post summarizing the point called “Trading Walks For Hits” at joeposnanski.com.) What that would mean is that Jeter may have created more runs than Justice did over the same period: depending on the particular method used, perhaps more than twenty more runs. And since runs create wins (that conversion usually being calculated at about ten runs to the win), that implies that Jeter likely helped his team to at least two more wins than Justice did over the same period.
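
Here is a back-of-the-envelope sketch of that chain of reasoning, using only the two rules of thumb just cited; real run-estimation methods also account for outs, walks, and extra-base hits, which is why the figure above is stated more conservatively.

```python
# Rough arithmetic only: a hit valued at about half a run, ten runs per win.
# Both conversion factors are crude rules of thumb, not a real run estimator.

jeter_hits, justice_hits = 385, 312
extra_hits = jeter_hits - justice_hits       # 73 more hits over 1995-97

runs_per_hit = 0.47                          # rough value of a hit, per the text
runs_per_win = 10                            # the usual rule of thumb

extra_runs = extra_hits * runs_per_hit       # ~34 runs under this crude rule
extra_wins = extra_runs / runs_per_win       # ~3 wins, i.e. at least the two
print(round(extra_runs), round(extra_wins, 1))   # extra wins suggested above
```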

To really know which player contributed more to winning would require a lot more investigation than that, but the point is that following James’ method leads toward the primary events that generate outcomes, and away from the illusions that a focus on outcomes fosters. Wins are generated by runs, so focus on runs; runs are created by hits, so focus on hits. So too does goal difference mean that while the World Cup recognizes wins, it also recognizes the events—goals—that produce wins. Put that way, it sounds quite commonsensical—but in fact James was lucky, in a sense, to stumble upon it: there are two ways to organize a sport, and only one of them is amenable to this kind of analysis. It was fortunate, both for James and for baseball, that he was a fan of a game that could easily be analyzed this way.

In sports like baseball, there is a fairly predictable relationship between scoring and winning. In other sports there isn’t, and that is why golf is so instructive: under one of its formats, the game is very amenable to the kind of analysis behind the World Cup’s goal difference or Bill James’ Pythagorean Expectation, while under its other format there is no predictable relationship between scores and wins. What the evidence will show is that having two different forms of the sport isn’t a mistake on the part of the designers: each form of the game was designed for a different purpose. And what that will show, I will argue, is that the kind of scoring system a game has predicts the purpose of its design—and vice versa.

On the PGA Tour, the standard tournament consists of four rounds, or 72 holes, at the end of which the players who have made it that far add up their scores—their number of strokes—and the lowest total wins. In the Rules of Golf, this format is known as “stroke play.” That is what makes it like the group stage of the World Cup, or like Bill James’ conception of baseball: play begins, the players attempt some action that produces a “score” (however that is determined), and at the end of play those scoring events are added together and compared. The player or team with the best total of scoring events—the most goals, or the fewest strokes—is then declared the winner. In short, under the rules of stroke play—just as in the World Cup’s group stage, or in Bill James’ notion of baseball—there is a direct relationship between the elemental act of the game, scoring, and winning.

But the format most often used by golf’s professionals is not the only method available: many amateur tournaments, such as the United States Amateur, use the format known as “match play.” Under this format, the winner of the contest is not necessarily the player who shoots the lowest overall score, as in stroke play. Instead, as John Van der Borght has put the matter on the website of the United States Golf Association, the official rule-making body of the sport, in match play the “winner is the player who wins the most holes.” It’s a seemingly minor difference—but it creates such a gulf that match play is virtually a different sport from stroke play.

Consider, for instance, this year’s Accenture Match Play tournament, held at the Dove Mountain course near Tucson, Arizona. (The only tournament on the PGA Tour held under match play rules.) “Factoring in conceded putts,” wrote Doug Ferguson of the Associated Press earlier this season, “Pablo Larrazabal shot a 68 and was on his way back to Spain,” while “Ernie Els shot 75 and has a tee time at Dove Mountain on Thursday.” In other words, Larrazabal lost his match and Els won his, even though Larrazabal played better than Els. Intuitively, Larrazabal was the better player at this tournament, which would lead one to expect that Larrazabal continued to play and Els exited—but the actual result was the reverse. It’s a Simpson’s Paradox, and unlike stroke play—which cannot generate Simpson’s Paradoxes—match play produces them all the time. That’s why match play golf does not resemble baseball or soccer, as golf does in stroke play, but instead a sport whose most prestigious tournament—Wimbledon—just concluded. And tennis is the High Church of Simpson’s Paradox.
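
To see how the two formats can disagree, here is a hypothetical nine-hole match sketched in Python: Player A takes fewer total strokes, and so would win under stroke play, but loses the match, because the holes A wins are won by wide margins and the holes A loses are lost by a single stroke.

```python
# Hypothetical 9-hole match: A wins on total strokes, B wins on holes.

holes = [
    # (A's score, B's score) on each hole
    (3, 7), (3, 7),                      # A wins two holes, by four strokes each
    (5, 4), (5, 4), (5, 4), (5, 4),      # B wins seven holes, by one stroke each
    (5, 4), (5, 4), (5, 4),
]

a_total = sum(a for a, _ in holes)             # 41 strokes
b_total = sum(b for _, b in holes)             # 42 strokes
a_holes = sum(1 for a, b in holes if a < b)    # 2 holes won
b_holes = sum(1 for a, b in holes if b < a)    # 7 holes won

print("stroke play:", "A wins" if a_total < b_total else "B wins")   # A wins
print("match play: ", "A wins" if a_holes > b_holes else "B wins")   # B wins
```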

Simpson’s Paradox, for example, is why many people think Roger Federer is not the greatest tennis player who ever lived, even though the Swiss has won 17 major championships, a record, among other career accomplishments. “But,” as Michael Steinberger wrote in the New York Times not long ago, “he has a losing record against [Rafael] Nadal, and a lopsided one at that.” (Nadal leads 23-10.) “How can you be considered the greatest player ever if you were arguably not even the best player of your own era?” Steinberger asks. Heroically, Steinberger attempts to answer that question in favor of Federer—the piece is a marvel of argumentation, in which the writer sets up a seemingly insurmountable rhetorical burden, the aforementioned question, and then seeks to overcome it. What’s interesting, though—and in several searches through the Internet I found many other pieces tackling more or less the same subject—is that neither Steinberger nor anyone else attempted what an anonymous blogger did in 2009.

He added up the points.

The blog is called SW19, which is the United Kingdom’s postal code for the district Wimbledon is in. The writer, “Rahul,” is obviously young—he (or she) stopped posting in December of 2009 because of the pressures of college—yet Rahul did something I have not seen any other tennis journalist attempt: in a post called “Nadal vs. Federer: A Pythagorean Perspective,” Rahul broke down “the Federer/Nadal rivalry on a point-by-point basis, just to see if it really is as lopsided as one would expect.” That is, given Nadal’s dominant win-loss record, the expectation would be that Nadal must have won an equally impressive share of the points from Federer.

By July of 2009—the time of publication—Nadal led Federer 13-7 in their head-to-head record, a 65 percent winning percentage. The two champions had played 4,394 total points across those 20 matches—one of them the 2008 French Open, won by Nadal in straight sets, 6-1, 6-3, 6-0. (Nadal has, as of 2014, now won 9 French Opens, a record for a single major, while Federer has won the French only once—in 2009, the very next year after Nadal blew him off the court.) Now, if there were a straightforward relation between points and wins, Nadal’s percentage of those points ought to be at least somewhat similar to his winning percentage in those matches.

But what Rahul found was this: of the total points, Nadal had won 2,221 and Federer 2,173. Nadal had beaten Federer by only 48 points, total, over their careers to that point, including the smackdown at Roland Garros in 2008: about one percent of all the points played. And if you took that one match out of the total, Nadal had won a grand total of eight more points than Federer, out of more than 4,000 points and 19 other matches. It is not 65 percent. It is not even 55 percent.

Still, it’s the final nugget Rahul uncovered that is likely of the most relevance. In three of the matches Nadal had won to that point in their careers, Federer had actually won more points: twice in 2006, in Dubai and Rome, and once at the Australian Open in 2009. As Rahul points out, “if Federer had won those three matches, the record would sit at 10-10”—and, at least in 2009, nobody would have been talking about Federer’s Achilles heel. I don’t know where the “Pythagorean” record between the two players stands at the moment, but it’s interesting that nobody has taken up this detail when discussing Federer’s greatness—though the nub of it has been taken up as a serious topic concerning tennis as a whole.

In January, in The Atlantic, Professor Ryan Rodenberg of Florida State University noted that not only does Federer have the 17 Grand Slam titles and the 302 weeks ranked No. 1 in the world, but he also holds another distinction: “the worst record among players active since 1990 in so-called ‘Simpson’s Paradox’ matches—those where the loser of the match wins more points than the winner.” Federer’s overall record in these matches is like his record against Nadal: not good. The Swiss is only 4-24.

To tennis aficionados, it’s a point that must appear irrelevant—at least, no one before Professor Rodenberg appears to have mentioned it online. To be sure, it does seem of questionable relevance: Federer has played nearly 1,200 matches professionally; 28 is a pittance. But Rodenberg, along with his co-authors, found that matches like the famous 2010 Isner-Mahut marathon at Wimbledon—matches where the loser out-scored the winner—constituted “about 4.5 percent” of “61,000 men’s ATP and Grand Slam matches dating back to 1990.” That’s nearly three thousand matches—and given that in exactly zero soccer matches or baseball games, over that time frame or any other, did the losing side net more goals or plate more runs than the winner, it at the least raises some questions.

How, after all, is it possible for one side of the net to win despite scoring fewer of the points? The answer, as Rodenberg puts it, is “tennis’ decidedly unique scoring system.” In sports like baseball, sports psychologist Allen Fox wrote recently on the website of the magazine Tennis, “score is cumulative throughout the contest … and whoever has the most points at the end wins.” Sports like tennis or match play golf are different: in tennis, as Fox says, “[i]f you reach game point and win it, you get the entire game while your opponent gets nothing—all the points he or she won in the game are eliminated.” In the same fashion, once a hole is over in match play golf it doesn’t matter what either competitor scored on that hole: each total is struck out, and the match in effect begins again. What that means is that certain points, certain scoring events, are worth more than others: in golf, what matters is the stroke that takes a hole, just as in tennis what matters is the point that takes a game, or a set, or a match. Those points are more valuable than other points—a fact of tremendous importance.
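
A hypothetical best-of-three match, sketched below, shows how that works in practice: every game Player A wins is a close one, every game A loses is a love game, and A takes the match while losing the overall point count by a wide margin. The numbers are invented; only the structure matters.

```python
# Toy model: every game A wins goes 4 points to 2; every game A loses is a
# love game, 0 points to 4. A wins the match but scores far fewer points.

CLOSE, LOVE = (4, 2), (0, 4)   # (A's points, B's points) in a single game

sets = [
    {"a_games": 6, "b_games": 4},   # A wins the set 6-4
    {"a_games": 1, "b_games": 6},   # B wins the set 6-1
    {"a_games": 6, "b_games": 4},   # A wins the set 6-4
]

a_points = sum(s["a_games"] * CLOSE[0] + s["b_games"] * LOVE[0] for s in sets)
b_points = sum(s["a_games"] * CLOSE[1] + s["b_games"] * LOVE[1] for s in sets)
a_sets = sum(1 for s in sets if s["a_games"] > s["b_games"])

print("A wins the match", a_sets, "sets to", len(sets) - a_sets)   # 2 sets to 1
print("total points:", a_points, "to", b_points)                   # 52 to 82
```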

It’s this scoring mechanism that allows tennis and match play golf to produce Simpson’s Paradox games: a system in which the competition as a whole is divided into smaller competitions, each functioning independently of the others. To get Simpson’s Paradox results, a system like this is essential. The $64,000 question, however, is: just who would design a system like that, one that can, in effect, punish the player who performs the sport’s defining act better, and more often, than the opponent? It isn’t enough to say that such results are uncommon; why allow them to happen at all? In virtually every other sport, after all, no result like these would ever come up. The only serious answer must be that tennis and match play golf were specifically designed to produce Simpson’s Paradoxes—but why? The only way to seek that answer, I’d say, is to search back through history.

The game we today call tennis in reality is correctly termed “lawn tennis,” which is why the formal name of the organization that sponsors the Wimbledon tournament is the “All England Lawn Tennis and Croquet Club.” The sport is properly called that in order to distinguish it from the older game known as “real tennis” or, in French, Jeu de Paume. Whereas our game of tennis, or lawn tennis, is generally played outdoors and on a single plane, Jeu de Paume is played indoors, in unique, non-standardized courts where strange bounces and funny angles are the norm. And while lawn tennis only came into existence in 1874, Jeu de Paume goes well back into the Middle Ages. “World titles in the sport were first competed in 1740,” as Rolf Potts noted in a piece about the game in the online magazine, The Smart Set, “and have continued to the present day, making Jeu de Paume men’s singles the oldest continuous championship event in sport.” Jeu de Paume, thus, is arguably the oldest sport in the world.

Aside from its antiquity, the game is also, and not unrelatedly, noted for its roots in the ancien regime: “Nearly all French royalty were familiar with the sport from the 13th century on,” as Rolf Potts notes. And not just French royalty: Henry VIII of England is regularly described as a great player by historians. These are not irrelevant facts, because the status of the players of Jeu de Paume in fact may be directly relevant to how tennis is scored today.

“When modern tennis,” writes Potts, “was simplified into its popular form in 1874, it appropriated the scoring system of the ancient French game.” So our game of tennis did not invent its own method of scoring; it merely lifted another game’s. And that method may be connected both to the fact that the older game was played by aristocrats and to the fact that so much about Jeu de Paume is bound up with gambling.

“In October of 1532,” Potts reports, Henry VIII lost 50 pounds on tennis matches: “about a thousand times the sum most Englishmen earned in a week.” Anne Boleyn, Henry’s second wife, by some accounts “was betting on a tennis game when Henry’s men arrested her in May of 1536,” while others say that her husband received the news of her execution while he himself was playing a match. Two centuries earlier, in 1355, King John II of France had been recorded paying off a bet with “two lengths of Belgian cloth.” And in his paper “Real Tennis and the Civilising Process,” published in the academic journal Sport in History, Rob Lake claims that “the game provided opportunities for nobles to engage in conspicuous consumption … through gambling displays.”

So much so, in fact, that Potts also reports that “some have speculated that tennis scoring was based on the gros denier coin, which was said to be worth 15 deniers.” Be that as it may, two facts stand out: first, that the game’s “gradual slide into obscurity began when fixed games and gambling scandals sullied its reputation in the late 17th century,” and second, that “games are still regulated by a complicated handicapping system … so that each player begins the game with an equal expectation of winning.” So elaborate is that handicap system, in fact, that when Rolf Potts played the first match of his life, against a club professional who was instructing him, he “was able to play a close game.” Gambling, it seems, was—as Potts says—“intrinsic to Jeu de Paume.” And since the sport still has a handicap system, which is essential to gambling, it still is.

We can think about why that is by comparing Jeu de Paume to match play golf, which also has an early connection both to feudalism and gambling. As Michael Bohn records in Money Golf: 600 Years Of Bettin’ On Birdies, the “earliest record of a golf bet in Scotland was in 1503,” when on February 3 King James IV paid out 42 shillings to the Earl of Bothwell in “play at the golf.” And as John Paul Newport of the Wall Street Journal writes, “historically all the early recorded competitions—King James IV in 1503, for example, or the Duke of York, later King James II [of England], in 1681—were match play.” That is likely not a coincidence, because the link between the aristocracy, gambling, and match play is not difficult to explain.

In the first place, the link between the nobility and gambling is not difficult to understand, since aristocrats were virtually the only people with both the money and the time for sport—the opportunity, as a prosecutor would say. “With idle people amusement is the business of life,” as the London magazine The Spectator noted in 1837; and King James’ bet with the Earl of Bothwell—42 shillings, or a little over £2—would have bought roughly six months’ work from a laborer during the sixteenth century. Not only that: the aristocracy were practically the only people who, legally speaking, could gamble during the Renaissance. As Nicholas Tosney notes in a 2010 paper for the University of Nevada, Las Vegas—“Gaming in Britain and America: Some Historical Comparisons”—gambling in England was outlawed in 1541 for anyone not at least a gentleman.

Yet opportunity alone does not carry a case; a motive is also required—which, of course, isn’t hard to find when it comes to gambling. Aside from the obvious financial inducement, aristocratic men had something extra pushing them toward gaming. As the same 1837 Spectator article noted, gambling was widely thought to be “a necessary accomplishment of a young man in fashionable circles.” After all, what better way to demonstrate belonging to the upper classes than by that form of conspicuous consumption that buys—nothing? The literature on the subject is too extensive to trot out in its entirety: nobles had both the means and the motive to gamble, so it seems reasonable to suppose that a game adopted by gamblers would be ideal for gambling.

And examined closely, match play does have such features. Gambling, after all, would best explain why match play consists of what John Van der Borght calls “18 one-hole contests.” According to John Paul Newport, that’s so “an awful hole here or there doesn’t spoil the day”—but a better explanation is likely that doing things this way allows the previous hole’s loser to bet again. Multiplying contests obviously multiplies the opportunities to bet—and thus the opportunities for a sucker to lose more. And that’s why it is significant that the match play format has a link to the nobility and to gambling: it helps demonstrate that the two formats of golf are not just different versions of the same game, but have two different purposes—purposes so different that they make for virtually different sports.

That difference in purpose is likely why, as Newport observes, not “until the mid-18th century are there records of stroke-play competitions.” One reason for the invention of the stroke play format was, Newport tells us, “to make tournaments involving larger numbers of golfers feasible.” The writer for the Wall Street Journal—make of that connection what you will—presents the new format as simply demanded by the increasing number of players (a sign, though Newport does not mention it, that the game was spreading beyond the boundaries of the nobility). But in reality stroke play was invented to serve a different purpose than match play, a purpose even now recognized by the United States Golf Association.

About the best definition of the purpose of stroke play—and thus of its difference from match play—can be found in the reply Sandy Tatum, then the executive director of the United States Golf Association, gave a reporter at the 1974 U.S. Open at Winged Foot. That tournament would become known as “the Massacre at Winged Foot,” because even the winner, Hale Irwin, finished over par (+7). When it became obvious just how tough the golf course was playing, one reporter asked Tatum whether the USGA was trying to embarrass the best players in the world. What Tatum said in reply is about as succinct an explanation of the purpose of the U.S. Open, and of stroke play, as is possible.

“Our objective is not to humiliate the best golfers in the world,” Tatum said in response to the question. “It’s to identify them.” And identifying the greatest golfers is still the objective of the USGA: that’s why, when Newport went to interview the current executive director of the USGA, Mike Davis, about the difference between stroke play and match play for his article, Davis said, “If all you are trying to do is determine who is playing the best over a relatively short period of time, [then] 72 holes of stroke play is more equitable [than match play].” The position of the USGA is clear: if the purpose of the competition is to “identify,” as Tatum said, or “determine,” as Davis said, the best player, then the best format for that purpose is stroke play, not match play.

One reason the USGA can know this is that it is obviously not in the interest of gamblers to identify themselves as great players. Consider, for instance, a photo printed along with Golf magazine’s excerpt of Kevin Cook’s book, Titanic Thompson: The Man Who Bet On Everything. The photo depicts one Alvin “Titanic Thompson” Thomas, swinging a club late in life. Thomas was born in 1892, and Cook says that “Titanic was the last great player to ignore tournament golf”—or stroke play golf, anyway. Not because he couldn’t play it: Byron Nelson, who among other exploits won 11 tournaments in a row on the PGA Tour in the summer of 1945, and who thus seems an excellent judge, said there was “no question” that Titanic could have excelled on Tour, “but he didn’t have to,” because Titanic “was at a higher level, playing for $25,000 a nine while we [Tour players] played for $150.” Thomas, or Thompson, was the greatest of golf gamblers; hence the caption of the photo: “Few golf photos exist of Thompson,” it reads, “for obvious reasons.” Being easily identifiable as a great golfer, after all, is not of much use to a gambler—so a format designed for gambling would have little incentive to “out” better players.

To put it simply, then: the game of tennis has the structure it does today because it descends from a different game—a game whose intent was not to identify the best player, but rather to enable the best player to maximize his profits. Where the example of tennis, or of match play golf, should lead, specifically, is to the hypothesis that any point-driven competition with non-continuous scoring—which is to say, one divided into sub-competitions whose results are independent of all the others—and in which some parts of the competition are worth more than other parts, ought to raise doubt, at the least, as to the validity of the value of the competition’s results.

The nature of such structures makes it elementary to conceal precisely what the structure is ostensibly designed to reveal: the ultimate value that underlies the whole operation, whether that is the athletic ability of an individual or a team—or something else entirely. Where goal difference, the Pythagorean Expectation, and stroke play all consolidate scores in order to get at the true value those scoring events represent, tennis’ method and match play divide scores in order to obscure value.

That’s why match play is so appealing to golf gamblers: it allows the skilled player to hide his talent, and thus maximize his income. Conversely, that’s why the U.S. Open uses stroke play: because the USGA wants to reveal the best player. Some formats of play lend themselves to one purpose or the other—and what that leads to is a kind of thought experiment. If the notion advanced here is correct, then there are two ways a given sport may score itself, and correspondingly two different purposes those means of scoring may serve. If a sport is more like golf’s match play than like golf’s stroke play, in short, it can be predicted to be vulnerable to gamblers.

As it happens, it is widely believed that professional tennis has a gambling problem. “Everyone knows,” said Andy Murray, last year’s Wimbledon winner, all the way back in October of 2007, “that match-fixing takes place in professional tennis.” A story in the Guardian that year summed up the scandal that had broken over the sport that August, which began when the world’s largest online betting exchange, Betfair, reported “irregular gambling patterns” on a match at the Polish Open between Nikolay Davydenko—once ranked as high as #3 in the world—and Martin Arguello—at the time ranked #87. At the end of September 2007, Novak Djokovic—this year’s Wimbledon champion—said “he was offered £10,000 to lose in a tournament in St. Petersburg” the previous year. In late October of 2007—after Murray’s comment to the press—“French undercover police” were “invited into the Paris Masters amid suspicions of match-fixing in tennis.” But what Simpson’s Paradox would tell the police—or tennis’ governing bodies—is that looking for fixed matches is exactly what the cunning gambler would want the authorities to do.

“The appeal of tennis to gamblers,” wrote Louisa Thomas for Grantland earlier this year, “makes total sense” for a number of reasons. One is that “tennis is played everywhere, all the time”: there’s likely a tournament, somewhere in the world, any time anyone feels the urge to bet, unlike a lot of other sports. That ubiquity makes tennis vulnerable to crooked gamblers: as Thomas observes, there are “tens of thousands of professional matches, hundreds of thousands of games, millions of points”—a spread of numbers so wide that the volume alone discourages detection by any authority.

Another reason why tennis should be appealing to gamblers is that “bettors can make wagers during play itself”: you can get online while watching a match and lay down some action. As The Australian reported this year—when a young man was arrested at the Australian Open with an electronic device designed to transmit scores quicker than the official tournament did—there are “websites that allow bets to be laid on individual events such as whether a player faults on serve.” Now, essentially the scam that the man at the Australian Open was arrested for is the same con as depicted in the film The Sting, which itself tells something of a tale about the sport.

But the real scandal of tennis, though perhaps Thomas does not emphasize this enough, is that it is vulnerable to manipulation simply because it is “broken into discrete points, games, sets, matches, and tournaments.” It’s a point, however, that one of Professor Rodenberg’s students understands.

What Benjamin Wright—a graduate student in Rodenberg’s department at Florida State University—knows is that, because of tennis’ scoring system, the sport doesn’t need crooked players throwing matches in order to be corrupt. “Governing bodies must be aware,” says Wright—in his master’s thesis, “Best of N Contests: Implications of Simpson’s Paradox in Tennis”—“that since tennis does not use a running score like other sports intentionally losing points, games, and sets is plausible since such acts may not have long-term implications.” In other words, “a player would not need to lose an entire match intentionally.” All that’s necessary—especially since it’s possible to bet on tennis in real time—is for a player to lose “points during specific periods of a match.” All a gambler needs to know, that is, is that a player will throw, say, the second point of the fourth game of the second set—knowledge that is nearly undetectable, because under the rules of the game it is entirely possible for a player to shave points without risking a loss.

“Who’s to say,” Thomas asks about the undetectability of corruption, that a player is “not just having a really rotten day?” But what Thomas doesn’t appear to grasp fully is the still more disgraceful question: how could a player even be accused of corruption if she has won her match? That’s the real scandal: how even apparently well-trained journalists can miss the point. “Although tennis is perceived as a genteel sport,” wrote Joe Drape of the New York Times about the Davydenko scandal in 2007, “it has always confronted the same problem as other contests based on individual competition like boxing.” That problem, Drape said, is that a “fixer needs to sway only one person, and taking a dive is hard to detect.” Drape is, to be sure, right about what he says—so far as it goes. But Drape does not point out—likely, I think, because he does not understand—why “taking a dive” is so difficult to unmask in tennis: because it’s possible to throw a point—or a game, or a set—without affecting the outcome of the match.

Now, this is so obviously crooked that the gall of it is simply breathtaking. Yet the reality is that, aside from a few very naive people who could probably stand to have a few dollars taken from them by shrewd, and likely Russian, mobsters, no one really loses much by this arrangement. There are far worse scams in the world, and people who bet on tennis are probably not very sympathetic victims. But knowing what we now know about tennis, and about match play golf, allows us to evaluate all competitions: any contest with the characteristics we have isolated (non-cumulative scoring, unequally valuable points) will necessarily be capable of producing Simpson’s Paradox results. Further, any contest that produces Simpson’s Paradox results does so by design: there’s no reason to add an extra layer of complexity to a competition unless it’s in somebody’s interest. Lastly, since the only reason to add that layer of complexity, and thus produce Simpson’s Paradoxes, is to conceal value, it’s more likely than not that those interests are not entirely legitimate.

Now, it so happens that there is a competition that has those two characteristics and has demonstrably produced at least one paradoxical result: one where the “winner” lost and the “loser” won.

That competition is called an American presidential election.