The end of Gerrymandering? Electoral boundaries vs. maths

In 2004 the Supreme Court of the USA was divided over its ability to rule on Vieth v. Jubelirer, a case concerning a Gerrymandering affair in Pennsylvania. In the USA, Gerrymandering refers to a redistricting carried out by the current majority in order to ease the re-election of its own candidates at the next legislative elections. It works in two ways: by packing the bulk of the opposing party’s voters into the smallest possible number of districts, and by cracking the rest across the largest possible number of districts. The majority thereby reduces the number of representatives elected by the opposing party and increases its own.

Four judges of the Supreme Court considered that Gerrymandering could amount to a violation of the equal treatment of voters, was therefore contrary to the Federal Constitution, and could be examined by the Court. Against them, four other judges, regarded as conservatives, concluded that the Constitution places all powers over redistricting in the hands of the legislative branch of each state; accordingly, in the name of the separation of powers, the Supreme Court could not rule on the matter. The last judge, Anthony Kennedy, finally sided with the conservative judges, but for a particular reason: he considered that Gerrymandering was potentially unconstitutional, but he could not find an objective means to quantify it. He lacked a mathematical measure of the phenomenon; he wanted a workable standard.

Put yourself in that situation: to persuade that last judge, you have to turn to mathematics and develop an effective measure of Gerrymandering. Over the last decade many scientists have worked on this question, in order to return this year to the Supreme Court with a new Gerrymandering case, this time from Wisconsin, and thus win Anthony Kennedy’s decisive vote.

One might first think of pointing to the scandalous gap between the result of the popular vote and its reflection in the Assembly: in the 2012 election in Wisconsin, for instance, the Republicans won 60 of the 99 seats in the State Assembly with only 47 percent of the votes. Yet such an argument would doubtless fail to convince the judge, for it purely and simply rejects the American electoral system: representatives must be elected through a first-past-the-post vote in each district, and the argument ignores this requirement; we would be substituting a proportional system for the constituency system. Whatever we think of it, the constituency system prevails in the USA, and it must be taken into account.

Illustration by Klifton Kleinmann.

Efficiency gap and wasted votes

To answer the problem, Nicholas Stephanopoulos, a law professor at the University of Chicago, and his colleague Eric McGhee argued in favour of the efficiency gap as a measure of Gerrymandering, in a paper published in 2015.1

Their index is based on the number of wasted votes per district and per party. For party A, for instance, the wasted votes are the sum of the votes A received in the districts it lost, plus its votes above the 50 percent threshold in the districts it won (see Table). One then counts the wasted votes of each party across the whole State and compares the two scores: the difference is the efficiency gap. The idea is that a number of wasted votes for Party A significantly higher than for Party B is a sign of strong Gerrymandering.

                Total votes          Wasted votes
District      A      B    Winner       A          B
I            33     67      B         33      67-50=17
II           33     67      B         33      67-50=17
III          46     54      B         46      54-50=4
IV           90     10      A      90-50=40      10
Total       202    198               152         48

Efficiency gap = (Party A’s wasted votes – Party B’s wasted votes) / Total number of votes = (152 – 48) / 400 = 26%

Example of an efficiency gap calculation in a fictional State with four electoral districts (adapted from Stephanopoulos & McGhee, 2015). Each district contains exactly 100 voters. Despite a slight overall majority of votes for Party A, Party B wins most of the seats. Wasted votes are computed according to the winner of each district. In the first district, for instance, Party B won by 67 votes to 33: 17 wasted votes are ascribed to Party B (all those above the 50 needed for a majority), while all 33 of Party A’s votes are wasted. The efficiency gap is then obtained from the formula, using each party’s wasted votes.
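For concreteness, the calculation above can be reproduced in a few lines of code. This is a minimal sketch of the wasted-vote bookkeeping; the district totals are those of the fictional table.

```python
def efficiency_gap(districts):
    """Efficiency gap between parties A and B.

    `districts` is a list of (votes_a, votes_b) pairs, one per district.
    Returns (wasted_a, wasted_b, gap), where gap is a fraction of all votes.
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        total += votes_a + votes_b
        half = (votes_a + votes_b) // 2  # the 50 percent threshold
        if votes_a > votes_b:
            wasted_a += votes_a - half   # winner's surplus votes are wasted
            wasted_b += votes_b          # all of the loser's votes are wasted
        else:
            wasted_b += votes_b - half
            wasted_a += votes_a
    return wasted_a, wasted_b, (wasted_a - wasted_b) / total

# The four fictional districts of the table:
print(efficiency_gap([(33, 67), (33, 67), (46, 54), (90, 10)]))
# → (152, 48, 0.26)
```

With 152 wasted votes for Party A against 48 for Party B out of 400 votes cast, the gap of 26 percent far exceeds the 8 percent threshold discussed below.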

Stephanopoulos and McGhee then suggest a threshold above which Gerrymandering can be suspected: an efficiency gap above 8 percent raises a strong suspicion of Gerrymandering.

The main quality of this approach is its simplicity (if you are not yet convinced, reread the Table; I am sure you will see the logic in it). This is why it has been chosen by the current plaintiffs. Moreover, it seems able to capture real cases of Gerrymandering, at least those studied by Stephanopoulos and McGhee.

On the other hand, it suffers from an extremely problematic drawback (to which I have not been able to find a solution).2 Imagine a State in which the Republicans obtain 51 percent of the votes and the Democrats 49 percent, and in which the two electorates are spread over the territory so perfectly homogeneously that, wherever the boundaries are drawn, every district shows a slight Republican majority. In that situation Gerrymandering is impossible: all seats go to the Republicans regardless of the boundaries. And yet the efficiency gap would be 48 percent, close to its theoretical maximum of 50 percent.
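The pathological case can be checked directly with the wasted-vote arithmetic. In this toy computation I assume ten identical districts of 100 voters, each split 51–49; the winner’s single surplus vote per district keeps the gap just below the 50 percent bound.

```python
def efficiency_gap(districts):
    """Signed efficiency gap (Democratic minus Republican wasted votes)."""
    wasted_r = wasted_d = total = 0
    for r, d in districts:
        total += r + d
        half = (r + d) // 2  # the 50 percent threshold
        if r > d:
            wasted_r += r - half  # winner wastes only its surplus votes
            wasted_d += d         # loser wastes every vote
        else:
            wasted_d += d - half
            wasted_r += r
    return (wasted_d - wasted_r) / total

# Ten districts, each split 51-49 for the Republicans: no map can change
# the outcome, yet the gap sits near its 50 percent theoretical maximum.
print(efficiency_gap([(51, 49)] * 10))  # → 0.48
```

Per district the Republicans waste 1 vote and the Democrats 49, so the gap is (490 − 10) / 1000 = 48 percent, even though no redistricting could alter a single seat.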

More troubling still than this flaw, which I find very significant, is that the authors do not acknowledge it in their own list of the measure’s shortcomings. One could object that the example is highly hypothetical and does not undermine the measure’s relevance in practice. Yet, for a mathematician like me, this simple example raises serious doubts about the efficiency gap.


In 1812, the highly partisan redistricting of Massachusetts under Governor Elbridge Gerry gave rise to districts of very strange shapes, to the point that the Boston Gazette saw in one of them, that of South Essex, the shape of a legendary salamander, which earned it the name Gerry-mander (see Illustration).

Author: Elkanah Tisdale (1771-1835). Initially published in the Boston Centinel, 1812.

Building on the tendency of Gerrymanders to produce very strange boundaries, the physicists Mattingly and Vaughn have put forward another measure, one that relies not on the results of the elections but on the shape of the constituencies.

Under American law, legal constraints on redistricting require the resulting constituencies to be contiguous and to contain roughly equal populations. Mattingly and Vaughn modelled these constraints and added one of their own, compactness: constituencies must be as compact as possible (mathematically, as close as possible to a disk) instead of stretching into complicated shapes like Gerry’s salamander.3

Thanks to this mathematical description they can randomly generate hundreds of possible electoral maps that satisfy these criteria. The maps thus created are “blind” to any other consideration, racial or political: only the balanced distribution of the population and the compactness of the districts are taken into account. One can then measure how far the boundaries designed by politicians deviate, in compactness and population balance, from the randomly generated ones.
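To give a flavour of the general idea (this is not Mattingly and Vaughn’s actual algorithm, which runs Markov chain Monte Carlo on real precinct maps with population-balance and compactness scores), one can randomly cut a toy strip of precincts into contiguous districts and count the seats each random map produces. The precinct vote counts below are invented for illustration.

```python
import random

# A toy strip of 12 precincts; each tuple is (Republican, Democratic) votes.
precincts = [(60, 40), (55, 45), (30, 70), (20, 80), (52, 48), (48, 52),
             (65, 35), (25, 75), (58, 42), (45, 55), (70, 30), (35, 65)]

def random_map(n_precincts, n_districts):
    """Split indices 0..n-1 into contiguous districts via random cut points."""
    cuts = sorted(random.sample(range(1, n_precincts), n_districts - 1))
    bounds = [0] + cuts + [n_precincts]
    return [range(bounds[i], bounds[i + 1]) for i in range(n_districts)]

def democratic_seats(districts):
    """Count the districts won by the Democrats under a given map."""
    won = 0
    for d in districts:
        rep = sum(precincts[i][0] for i in d)
        dem = sum(precincts[i][1] for i in d)
        won += dem > rep
    return won

random.seed(0)
samples = [democratic_seats(random_map(len(precincts), 4))
           for _ in range(1000)]
# Average Democratic seats over 1000 random "blind" maps; a drawn map whose
# seat count falls far outside this distribution looks suspicious.
print(sum(samples) / len(samples))
```

The real method additionally rejects maps whose districts are too unequal in population or too stretched out, which is where the Monte Carlo machinery (and the arbitrariness discussed below) comes in.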

Applied to the 2012 US congressional elections in North Carolina, the results are unequivocal: in none of the maps simulated by this method did the Democrats obtain fewer than 6 of the 13 seats (the average over all simulations being 8), whereas in reality they got only 4.

Being based on an equitable districting of the territory, this method provides an index that is not tied to a particular election, a definite advantage over the efficiency gap approach. Yet in certain respects it is quite limited. First of all, in contrast to the efficiency gap, which uses school mathematics and can be calculated on the back of an envelope, understanding the metric of Mattingly and his colleagues requires some knowledge of Markov chain Monte Carlo algorithms, programming, Gibbs measures and other transition kernels (I told you the efficiency gap criterion was, after all, quite simple).

The judges of the Supreme Court would be condemned simply to trust the foundations of this measure without being able to verify it themselves. Added to this, Mattingly’s method requires setting a more or less arbitrary parameter defining the trade-off between how balanced the districts are in population and how compact they are.


In June 2017, Judge Kennedy sided with the liberal4 judges and agreed to hear Wisconsin’s case of alleged Gerrymandering. The plaintiffs put forward the efficiency gap, much simpler than the other potential criteria. As of this writing (October 2017), the case is before the Supreme Court.

The reader will have understood that, despite its simplicity, I am not a fervent defender of the efficiency gap, because of its inability to deal correctly with the counter-example I have presented. It is nonetheless this approach that seems to have found favour in court. And although I find Mattingly and Vaughn’s approach interesting for grasping certain aspects of Gerrymandering, it is neither general nor simple enough to become the workable standard Judge Kennedy desires. It is interesting, though, to see that a single phenomenon can be tackled from two radically different angles: wasted votes on one side, redistricting on the other. Moreover, the Supreme Court’s verdict will not end the mathematical debate around these questions, as researchers are still working on better measures of the phenomenon.5

Through this example I have tried to present a side of mathematics that is often poorly known to the general public: modelling. It consists in translating a real-life phenomenon with relatively vague contours into precise mathematical terms, and in attaching measures to the resulting notions so as to apprehend reality in the most objective possible manner. The daily life of a mathematician is as much about finding the right concepts as it is about proving theorems.


I would like to thank Guillaume, one of the reviewers of this article, whose very relevant remarks greatly improved its quality.


General press articles on political aspects of Gerrymandering:

  • Bazelon, E. (2017). “The New Front in the Gerrymandering Wars: Democracy vs. Math”. The New York Times.

  • Chan, D. (2017). “A Summer School for Mathematicians Fed Up with Gerrymandering”. The New Yorker.

Articles about Gerrymandering measurements:

  • Arnold, C. (2017). “The mathematicians who want to save democracy”. Nature News 546, 200.

  • Bangia, S., Graves, C.V., Herschlag, G., Kang, H.S., Luo, J., Mattingly, J.C., and Ravier, R. (2017). “Redistricting: Drawing the Line”. ArXiv:1704.03360 [Stat].

  • Mattingly, J.C., and Vaughn, C. (2014). “Redistricting and the Will of the People”. ArXiv:1410.8796 [Physics].

  • Stephanopoulos, N.O., and McGhee, E.M. (2015). “Partisan Gerrymandering and the Efficiency Gap”. The University of Chicago Law Review 82, 831–900.


  1. (Stephanopoulos and McGhee, 2015) 

  2. I raised this concern in a question on Stack Exchange; interested readers can follow the discussion there. 

  3. Compactness may refer to several close but distinct notions in mathematics. The exact definition referred to here can be found in (Mattingly and Vaughn, 2014) and (Arnold, 2017). 

  4. Liberal in the sense it is used in the USA, in opposition to conservative. 

  5. See Chan (2017).