Some thoughts on “the keys to the White House”

Vivek Mohta asks (in a comment here) the following:

The conclusion [of some research on election forecasting] seems to be that presidential election results turn primarily on the *performance* of the party controlling the White House. The political views of and campaigning by the challenging candidate (within historical norms) have little to no impact on results.

The most recent paper applying this method is The Keys to the White House: Forecast for 2008. I haven't yet looked at the original paper from 1982 where the method was developed, but there was a reference to his work in Operations Research Today: "His method is based on a statistical pattern recognition algorithm for predicting earthquakes, implemented by Russian seismologist Volodia Keilis-Borok. In English-language terminology, the technique most closely resembles kernel discriminant function analysis."
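Before getting to my thoughts, a quick aside on that last quote. Here is a minimal sketch, in Python with scikit-learn, of what a discriminant-style classifier looks like next to plain logistic regression. Everything in it is hypothetical: the data are random, the thirteen binary "keys" are just placeholder predictors, and none of this is Lichtman's actual model or data (it also uses the linear rather than the kernel version of discriminant analysis). The point, which I return to in item 1 below, is that on data like this the two approaches give essentially the same linear decision rule.

```python
# Hypothetical sketch (not Lichtman's model or data): compare a linear
# discriminant classifier with logistic regression on made-up binary
# "key"-style predictors and win/lose outcomes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up history: 30 elections, 13 binary predictors each (the count is
# only illustrative), outcome = 1 if the incumbent party wins.
n_elections, n_keys = 30, 13
X = rng.integers(0, 2, size=(n_elections, n_keys))
# Tie the outcome loosely to how many predictors are "true", plus noise.
y = (X.sum(axis=1) + rng.normal(0, 1.5, n_elections) > n_keys / 2).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
logit = LogisticRegression(max_iter=1000).fit(X, y)

# Both methods give a linear decision rule in the predictors, and on data
# like this their in-sample calls agree on nearly every election.
agreement = (lda.predict(X) == logit.predict(X)).mean()
print(f"LDA and logistic regression agree on {agreement:.0%} of elections")
```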

My thoughts:

1. The bit about the seismologist is perhaps of historical interest but not so relevant for our understanding. It's ok to just think about this in terms of linear and logistic regression.

2. My favorite single thing written on election forecasting is Steven Rosenstone's 1984 book, Forecasting Presidential Elections. He (and later researchers such as Campbell and Erikson) argues, and the data support this, that the national election outcome is largely predictable from the recent performance of the economy, with state-to-state variation being mostly consistent from election to election after controlling for home-state and region effects. (A toy version of this sort of vote-share regression is sketched after this list.)

3. Rosenstone finds that candidates do benefit slightly by being political moderates–but it’s only a couple of percentage points, so not a huge effect.

4. Campaigns do have effects. However, presidential elections tend to be closely contested in terms of resources, and so the two sides’ campaigns pretty much cancel each other out.

5. The Lichtman stuff is ok in the sense of generally getting things right without having to be quantitative–but it has one thing that really bugs me, which is the attempt to predict the winner of every election. In the past 50 years, there have been 4 elections that have been essentially tied in the final vote: 1960, 1968, 1976, and 2000. (You could throw 2004 in there too.) It’s meaningless to say that a forecasting method predicts the winner correctly (or incorrectly) in these cases. And from a statistical point of view, you don’t want to adapt your model to fit these tossups–it’s just an invitation to overfitting.

To put it another way: suppose his method had mispredicted 1960, 1968, and 1976. Would I think any less of it? No. A method that predicts vote share (such as those used by political scientists) can get credit in these close elections by predicting the vote share with high accuracy. Again, I see virtue in the simplicity of Lichtman's method, but let's be careful in how we evaluate it.

(I made the above point here (see the last full paragraph on page 120) in my 1993 review of Lewis-Beck and Rice’s book on forecasting elections.)

6. If your goal really is forecasting, and you have the technical sophistication of an operations researcher, you should definitely be forecasting vote share (at the national level, or even better, by state) rather than just the winner. Lots of information gets lost when you convert a continuous outcome into a binary one; see the sketch below.
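To make points 2 and 6 concrete, here is a minimal sketch of what a vote-share forecast looks like next to a thresholded winner call. The growth and vote-share numbers are hypothetical placeholders, not real election data, and a single-predictor least-squares fit is far cruder than the models Rosenstone, Campbell, or Erikson actually use; the point is only that the continuous forecast carries information the binary call throws away.

```python
# Hypothetical sketch: forecast the incumbent party's two-party vote share
# from one economic indicator, then compare with the thresholded call.
# All numbers are made-up placeholders, not real election data.
import numpy as np

growth = np.array([ 3.1,  0.4, -1.2,  2.0,  4.5,  1.1])   # income growth, %
share  = np.array([53.0, 49.5, 46.0, 51.0, 56.5, 50.2])   # incumbent share, %

# Least-squares fit: share ≈ a + b * growth.
b, a = np.polyfit(growth, share, deg=1)

# Forecast for a new election year with, say, 0.8% growth.
pred_share = a + b * 0.8
pred_winner = "incumbent party" if pred_share > 50 else "challenger"

print(f"predicted two-party vote share: {pred_share:.1f}%")
print(f"thresholded winner call: {pred_winner}")
# A forecast of 50.3% vs 49.7% says "this one is basically a tossup";
# reducing it to a win/lose call hides that, and "calling the winner"
# in such a year is close to a coin flip either way.
```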

2 thoughts on "Some thoughts on 'the keys to the White House'"

  1. Sound advice on overfitting the tossups, but it raises a fundamental conundrum. Plenty of elections weren't close. No one cares if you have a model that gets every runaway election right. So doesn't that leave you in the position of making a model which is only useful in the moderately close elections, whatever that means? To the contrary, I would argue that the close elections are exactly what you ought to be modelling, and that your metric ought not to be a binary classification measure (whose justification I never really understood anyway), but whether or not any of the close elections were predicted by the model to be close elections. Assuming logit or probit models, the close ones ought to have predicted values near zero, and it's not overfitting to try and get the predicted values close to zero (see the short note after these comments). Your vote-shares model obviously steps in this direction…

  2. Thanks for your helpful thoughts and for the Rosenstone reference.

    I agree that you lose information by converting a continuous measure to binary. It seems that the Lichtman treatment of the independent variables as binary allows him to use qualitative historical data for about 20 additional elections going back to 1860. However, it requires him to convert richer continuous data to binary post-1948.
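A brief note on the first comment's point about logit and probit models: "predicted values near zero" refers to the latent linear index, and under either link an index of zero corresponds to a predicted win probability of one half, i.e. a forecast that the election is a tossup. A quick numerical check, assuming SciPy is available:

```python
# Under a logit or probit link, a linear index of 0 maps to probability 0.5,
# i.e. the model calls the election a tossup rather than picking a winner.
from scipy.special import expit  # inverse-logit (logistic) function
from scipy.stats import norm     # probit link uses the standard normal CDF

print(expit(0.0))     # 0.5
print(norm.cdf(0.0))  # 0.5
```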
