Predicting elections using the “most important issue” question

Dan Goldstein points to a draft article by Andreas Graefe and J. Scott Armstrong:

We [Graefe and Armstrong] used the take-the-best heuristic in combination with simple linear regression to develop the PollyMIP model for forecasting the incumbent’s two-party share of the popular vote in U.S. presidential elections. The model is based on the theory of single-issue voting: voters will select the candidate who is expected to do the best job in handling the issue that is most important to them. We used cross-validation to calculate 1,000 out-of-sample forecasts for the last ten U.S. presidential elections from 1972 to 2008 (100 per election year). PollyMIP correctly predicted the winner of the popular vote in 97% of all forecasts. For the last six elections, it yielded a higher number of correct predictions of the election winner than the Iowa Electronic Markets (IEM), although the IEM provided more accurate predictions of the actual vote-shares. In predicting the two-party popular vote shares for the last three elections from 2000 to 2008, the model provided out-of-sample forecasts that were competitive with those from established econometric models. PollyMIP contributes new information to the forecasts; in combination with other methods, it led to substantial improvements in accuracy. Finally, in using information about the frequency of Internet searches, it allows for easy tracking of issue importance and early identification of emerging issues and, thus, can suggest which issues candidates should stress in their campaign.

Interesting stuff. I have a few comments:

1. The forecasting method is described on page 16. It gives a new prediction every day, to the extent that the majority judgment of the most important issue changes or attitudes change about candidate performance on those issues. Their statistical model doesn’t seem consistent with their psychological story, though: in the psychological model, individual voters decide based on the single issue most important to them, but in the statistical model, the same issue is applied to all voters. (I sketch this distinction in code at the end of the post.)

2. They should show some time series plots of their estimates during each campaign, especially for elections such as 1988, where the lead in the polls changed. I was surprised to see that the only time series plot in the article was a graph of search-engine data that weren’t actually used in the forecasts. Of course, as a statistician, I’m almost always going to ask for more graphs.

3. I think they are making a big mistake by evaluating their method based on how well it predicts the popular vote winner. Two of the elections in their database, 1976 and 2000, were essentially tied. Their model predicted the winner in those two years, but that counts for nothing to me: they’re predicting a coin flip. (A quick calculation at the end of the post makes this precise.) And 1972 and 1984 were never close, so you don’t get much credit for predicting the winner of those, either. That really leaves them with six elections to forecast: 1980, 1988, 1992, 1996, 2004, and 2008.

4. The authors make a good point, in this paper and their others, about the virtue of combining information and using simple averages and scores as forecasts. This is well known in the judgment and decision-making literature but maybe not so much elsewhere. (The last sketch at the end of the post shows the basic logic.)

5. Even if the above paper has problems from a political science point of view, maybe it is moving us in some way toward a useful convergence of ideas from forecasting, political science, and psychology.
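P.S. A few code sketches to make the points above concrete.

On point 1, here is a minimal sketch, with made-up data (it is not the authors’ code), of the gap between the two stories: voters each deciding on their own most important issue can produce a different aggregate outcome than scoring the incumbent on the single modal issue, which is what the statistical model does.

```python
from collections import Counter

# Hypothetical voters: (their most important issue, the candidate they
# judge better on that issue). Issues and counts are invented.
voters = [
    ("economy", "incumbent"), ("economy", "challenger"), ("economy", "challenger"),
    ("war", "incumbent"), ("war", "incumbent"),
    ("health", "incumbent"), ("health", "incumbent"),
]

# Psychological story: each voter votes on his or her *own* issue.
individual_share = sum(1 for _, c in voters if c == "incumbent") / len(voters)

# Statistical model (roughly): find the single modal "most important issue"
# and score the incumbent on that issue alone, for everyone.
modal_issue = Counter(issue for issue, _ in voters).most_common(1)[0][0]
on_modal = [c for issue, c in voters if issue == modal_issue]
aggregate_share = sum(1 for c in on_modal if c == "incumbent") / len(on_modal)

print(modal_issue)       # economy
print(individual_share)  # 0.71: the incumbent wins voter by voter
print(aggregate_share)   # 0.33: but loses on the modal issue
```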
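On point 3, a back-of-the-envelope calculation. Suppose a forecast of the winner’s two-party vote share is unbiased with a standard error of about 2 percentage points; the 2-point figure is my assumption, not the paper’s, and the two-party shares below are approximate. The probability of calling the popular-vote winner is then just a normal tail probability, and for an essentially tied election it is barely better than a coin flip:

```python
from statistics import NormalDist

sigma = 2.0  # assumed forecast standard error, in percentage points
# Approximate two-party share of the popular-vote winner:
winner_share = {1972: 61.8, 1988: 53.9, 2000: 50.3}

for year, share in sorted(winner_share.items()):
    # Chance an unbiased Normal(share, sigma) forecast lands above 50%,
    # i.e., calls the popular-vote winner correctly.
    p_correct = 1 - NormalDist(mu=share, sigma=sigma).cdf(50.0)
    print(f"{year}: P(correct call) = {p_correct:.2f}")

# 1972: 1.00 -- a landslide; everyone gets this right
# 1988: 0.97
# 2000: 0.56 -- essentially a coin flip, so a correct call means little
```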
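And on point 4, a toy simulation of why simple averaging helps: when several methods have partly independent errors, their average has a smaller root mean squared error than a typical single method. The numbers here are invented, and I draw the errors as independent, which overstates the gain; real forecast errors are correlated, so the improvement is smaller but usually still real:

```python
import random

random.seed(1)
TRUE_SHARE = 52.0       # hypothetical true two-party vote share
N_METHODS, N_TRIALS = 5, 10_000
sq_err_single = sq_err_avg = 0.0

for _ in range(N_TRIALS):
    # Five forecasts, each unbiased with a 2-point standard error.
    forecasts = [random.gauss(TRUE_SHARE, 2.0) for _ in range(N_METHODS)]
    average = sum(forecasts) / N_METHODS
    sq_err_single += (forecasts[0] - TRUE_SHARE) ** 2
    sq_err_avg += (average - TRUE_SHARE) ** 2

print((sq_err_single / N_TRIALS) ** 0.5)  # ~2.0: one method alone
print((sq_err_avg / N_TRIALS) ** 0.5)     # ~0.9: the simple average (2/sqrt(5))
```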