An Interesting Alternate Analysis

If you liked my post, “On the Fairness of the 2000 and 2016 Presidential Elections“, go check out “Fixing a Presidential Election“. That post starts with a similar analysis discounting the Electoral College votes “not based on population”, then goes much deeper, including a look at the other instance of an electoral-popular mismatch, 1888, which was a mess of vote-buying corruption. I’ll refrain from commenting on its arguments about rigging in 2016; I prefer to let the author make that case.

Presidential Elections Efficiency Gap


In the recent New York Times article, “Judges Find Wisconsin Redistricting Unfairly Favored Republicans”, a panel of judges rejected Wisconsin’s 2010 redrawing of State Assembly districts as an unconstitutional partisan gerrymander. What caught my eye in this story was the acceptance of a metric called the “efficiency gap” for measuring the fairness of voting districts directly from election data. Consider just the two major parties. In each district, one party wins and the other loses. All of the loser’s votes are considered “wasted”, in that they didn’t result in a seat. Similarly, the winner’s votes beyond one more than the loser’s total are all “wasted” as well, since the winner would still have won without those extra votes. If you subtract the total wasted votes for one party across all districts from the total wasted votes for the other party, and divide by the total votes cast, you obtain the efficiency gap. The ideal efficiency gap for fairly drawn districts would be very close to zero; i.e., each party ends up with almost the same number of wasted votes in a set of closely contested races.
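As a sketch of the bookkeeping just described (the function and variable names here are mine, and only the two major parties are counted), the gap can be computed directly from per-district vote totals:

```python
def efficiency_gap(districts):
    """Efficiency gap from (party_a, party_b) vote pairs, one per district.

    Positive means party B wasted more votes than party A."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        if a > b:
            wasted_a += a - (b + 1)  # winner's surplus beyond a bare majority
            wasted_b += b            # every losing vote is wasted
        else:
            wasted_b += b - (a + 1)
            wasted_a += a
    return (wasted_b - wasted_a) / total

# Two mirror-image close races waste votes symmetrically: gap of zero.
print(efficiency_gap([(55, 45), (45, 55)]))  # 0.0
```

In the mirror-image example, each party wastes 9 surplus votes in its win and 45 votes in its loss, so the totals cancel exactly.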

I wondered about applying this idea as a measure of fairness in the U.S. presidential election, substituting states for districts. When I ran the numbers (ignoring, for simplicity, the fact that Nebraska and Maine don’t follow the same winner-takes-all-electors rule as the other states), I found that in the 2016 election, out of ~130.2M total votes, Red had ~21M wasted votes and Blue had ~41.5M wasted votes, giving an efficiency gap of 15.8%. (A Jupyter notebook with the actual calculations is available online.) What about the previous election, where Blue won both the popular vote and the electoral college? One would intuitively expect a lower efficiency gap when the popular vote and the electoral college winner align. Well, in 2012, out of 123.7M total votes, Red had 41.1M wasted votes and Blue had 28.2M wasted votes. The magnitudes of wasted votes were indeed closer, and the efficiency gap works out to be 10.4%.
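A quick sanity check on those figures (using the rounded totals quoted above, which is why 2016 lands at 15.7% here rather than the 15.8% the full-precision counts in the notebook give):

```python
# Rounded wasted-vote totals quoted above, in millions of votes.
wasted_red_2016, wasted_blue_2016, total_2016 = 21.0, 41.5, 130.2
wasted_red_2012, wasted_blue_2012, total_2012 = 41.1, 28.2, 123.7

# In 2016 Blue wasted more; in 2012 Red wasted more.
gap_2016 = (wasted_blue_2016 - wasted_red_2016) / total_2016
gap_2012 = (wasted_red_2012 - wasted_blue_2012) / total_2012
print(f"2016: {gap_2016:.1%}   2012: {gap_2012:.1%}")  # 2016: 15.7%   2012: 10.4%
```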

However, the article discusses an unfairness threshold for the efficiency gap as applied to legislative districting. Studying four decades’ worth of state redistricting plans and election data, researchers determined that any gap in excess of 7% led to a situation where the party in power maintained that power. In 2012, even when the Electoral College result was matched by a popular majority, the efficiency gap was a quite large 10.4%. The 2016 gap of 15.8% is interesting: in Wisconsin, the 2011 State Assembly districting was rejected partly based on an increase in the efficiency gap from 11.69% to 13%, both of which are lower than 15.8%.

I admit the comparison is a bit apples and oranges. The interpretation of the gap is not as straightforward in presidential races, since the “districts” never get redrawn. The conclusion I would draw, though, is that the two-party-dominated, winner-takes-all-the-electors system we’ve had since the 1880s fundamentally serves only the stability of the two major parties. I would personally prefer a system closer to what is actually described in the Constitution, where the states nominate electors to the college every 4 years, and those electors deliberate in a convention-like process to elect a president and vice-president. I would be happier if we, the citizens, elected representatives at the state and federal level, and the President were selected indirectly by our state legislatures via the Electoral College. I really don’t enjoy the quadrennial circus of U.S. Presidential elections as they are currently run.

On the Fairness of the 2000 and 2016 Presidential Elections

What follows is an interesting thought experiment I thought I’d share. The two major candidates had roughly equal shares of the popular vote (Red = 47.3%, Blue = 47.8%). What if the system were modified to take away the 2 bonus “senatorial” electoral votes from each state (and DC)? Well: Red votes => 306 - 2*30 = 246; Blue votes => 232 - 2*21 = 190. Red still gets the same disproportionate share of electoral votes!

Here’s what I mean by “same”: define D, the “distortion factor” in favor of a party, as the ratio of its received electoral votes to the number it would have had if electoral votes were perfectly proportional to the popular vote. So D[Red, actual] = 306/(0.473*538) ≈ 1.20 and D[Blue, actual] = 232/(0.478*538) ≈ 0.90. D[Red, modified] = 246/(0.473*436) ≈ 1.19, and D[Blue, modified] = 190/(0.478*436) ≈ 0.91, essentially the same as before. You can also define G, the unfairness gap, as |D[Red] - D[Blue]|. G[actual] ≈ 0.30, and G[modified] ≈ 0.28, so we did manage to design a slightly fairer electoral tally.
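These ratios are simple enough to script; here is a minimal sketch (names are mine) that reproduces the 2016 figures above:

```python
def distortion(electoral_votes, popular_share, total_electoral_votes):
    """D: ratio of electoral votes received to a perfectly proportional share."""
    return electoral_votes / (popular_share * total_electoral_votes)

# 2016 under the actual system (538 electoral votes).
g_actual = abs(distortion(306, 0.473, 538) - distortion(232, 0.478, 538))

# 2016 with the 2 "senatorial" votes stripped from each state and DC (436 votes).
g_modified = abs(distortion(246, 0.473, 436) - distortion(190, 0.478, 436))

print(round(g_actual, 2), round(g_modified, 2))  # 0.3 0.28
```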

I think that assigning electoral votes by congressional district would barely budge this result. The current House has a 247-188 Red majority, and in 2012 Red kept its House majority even though Blue won a greater number of popular votes for House seats nationwide.

By contrast, in 2000 (the most recent prior example of an electoral college-popular vote mismatch), Red and Blue had similar proportions of the popular vote (47.9% Red, 48.4% Blue), but Red won only 271-266 in the electoral college (one elector abstained). D[Red, actual] = 271/(0.479*538) ≈ 1.05 and D[Blue, actual] = 266/(0.484*538) ≈ 1.02, so G[actual, 2000] ≈ 0.03, a lot less unfair than this year, where G is approximately 0.30. It was again the case that Red held 30 states, giving 60 “senatorial” electoral votes, and Blue held 20 states plus DC, just like in 2016. Run the same math as above and you get a 224-211 Blue electoral victory. D[Red, modified] = 211/(0.479*436) ≈ 1.01 and D[Blue, modified] = 224/(0.484*436) ≈ 1.06, so G[modified] ≈ 0.05: the modification leaves the unfairness small but tips it slightly into Blue’s favor, and happens to match up with the popular result.

The difference, of course, is that Red picked up some of the bigger states (by population) in 2016 that belonged to Blue in 2000, while Blue picked up fewer of Red’s big states. While it is clear 2016 was more unfair, as measured by D (distortion) and G (distortion gap, a.k.a. unfairness gap), the electoral college result is completely robust, thanks to Red capturing a majority of the right states. Notwithstanding any concerns you may have about the particular Red candidate this year, if you accept the electoral college at all, I would say that Red’s victory this year is robust and legitimate.

Debug an Ubuntu/Debian Tomcat User Instance

I sometimes work on web application code that I test by deploying into a Tomcat servlet container. I develop on Ubuntu-based systems, and like to use the tomcat7-user (and/or tomcat8-user) package for testing. I believe this package is also available on Debian systems.

$ sudo apt-get install tomcat7-user

Once installed, a new command, tomcat7-instance-create, is available for creating local Tomcat instances. It saves me from dealing with a globally configured Tomcat server on my system:

$ tomcat7-instance-create my-tomcat
$ cd my-tomcat/bin
$ ls
shutdown.sh  startup.sh
$ cat startup.sh
export CATALINA_BASE="/home/dale/my-tomcat"
/usr/share/tomcat7/bin/catalina.sh start
echo "Tomcat started"

Of course, the CATALINA_BASE value in your script will be slightly different. Sometimes I want the ability to connect an interactive debugger to my application’s running server-side code. The provisioned startup script doesn’t provide an easy way to enable JPDA (the Java Platform Debugger Architecture) so that I can do this. Let’s create a script that will; I’ll call the copy debug-startup.sh:

$ cp startup.sh debug-startup.sh

Use your favorite editor to modify debug-startup.sh so it looks as follows:

export CATALINA_BASE="/home/dale/my-tomcat"
/usr/share/tomcat7/bin/catalina.sh jpda start
echo "Tomcat started in debug mode"
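For extra control, catalina.sh’s jpda mode honors a couple of environment variables, so the debugger settings can be tuned without further edits. A small sketch (JPDA_ADDRESS and JPDA_SUSPEND are the standard catalina.sh debugging knobs; 8000 is Tomcat 7’s default port):

```shell
# Tune JPDA before invoking catalina.sh's "jpda start".
export JPDA_ADDRESS=8000   # port the debugger attaches to (Tomcat 7 default)
export JPDA_SUSPEND=n      # "y" blocks startup until a debugger attaches
echo "JPDA will listen on port $JPDA_ADDRESS"
```

Export these in the debug script itself, or in the shell before launching it.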

I use Eclipse as my development environment. If you do, too, then you can attach the interactive debugger to your running application by creating a “Remote Java Application” debug configuration pointed at the JPDA port.

Recursively find all Symbolic Links with PowerShell

I just started using SyncThing, and have found it to be a powerful way to keep the files on my various personal computers in sync. I like this solution: I sync on my own private network, avoiding privacy-risking cloud services. I discovered fairly quickly that SyncThing will give warning messages about any symbolic links it encounters, letting you know it won’t try to sync them. In my case, these were shortcut folders like My Videos and My Music in my ~/Documents folder.

This PowerShell script will let you discover any such symbolic links recursively under a given folder:

param($Path = ".")

# Validate that the argument is actually a folder.
if (-not (Test-Path $Path -PathType 'Container')) {
  throw "$($Path) is not a valid folder"
}

# Remember where we started so we can return afterwards.
$Current = Get-Item .

# A file or folder is a symbolic link (or junction) if its
# ReparsePoint attribute bit is set.
function Test-ReparsePoint($File) {
  if ([bool]($File.Attributes -band [IO.FileAttributes]::ReparsePoint)) {
    return $true
  } else {
    return $false
  }
}

cd $Path
# Recurse through all files and folders, suppressing error messages.
# Return any file/folder that is actually a symbolic link.
ls -Force -Recurse -ErrorAction SilentlyContinue | ? { Test-ReparsePoint($_) }
cd $Current

The function in this script was derived from an answer given by Keith Hill on Stack Overflow. If you save the above script as ~/scripts/FindSymLinks.ps1, here is what a sample session looks like:

PS C:\Users\Dad\scripts> .\FindSymLinks.ps1 ..\Documents
    Directory: C:\Users\Dad\Documents
Mode               LastWriteTime     Length Name
----               -------------     ------ ----
d--hs       10/24/2009   2:09 PM           My Music
d--hs       10/24/2009   2:09 PM           My Pictures
d--hs       10/24/2009   2:09 PM           My Videos

I’m Making My PhD Thesis Available Online

Over 10 years ago, I defended my PhD thesis. Due to my moving to another state shortly thereafter, possibly combined with bureaucratic confusion at my alma mater, I never received bound copies of my dissertation. I have also lost any electronic copies I may have once had in the intervening time. Recently, though, my employer started subscribing to a dissertation service, and I was able to download a scanned version of my dissertation.

Since I had the foresight to apply an open access license to my document, I am creating a web site on GitHub which will be an electronically accessible version of my thesis document. I also have some data backup disks lying around, and intend to upload that data as supplemental material. I have a large family, and not much spare time, but I am making slow progress. The front matter is complete and the first chapter is almost complete. The site is here: Particle Decay Branching Ratios for States of Astrophysical Importance in 19Ne.