Rasmussen razzmatazz

David Shor writes:

Rasmussen polls are consistently to the right of other polls, and this is often explained in terms of legitimate differences in methodological minutiae.


But there seems to be evidence that Rasmussen’s house effect is much larger when Republicans are behind, and that it appears and disappears quickly at different points in the election cycle.

[More graphs at the above link.]

I don’t know anything about this particular polling organization and haven’t looked at house effects since 1995 (see page 123 of this article, but please note that the numbers for the outlying Harris polls in Figure 1b are off; we didn’t realize our mistake until after the article was published). It seems like a good idea to keep pollsters honest by checking this sort of thing, and I like David Shor’s approach of trying to break down the effect by seeing where it varies. As we always say around here, interactions are important.

5 thoughts on “Rasmussen razzmatazz”

  1. We've seen several graphics like this one (on this blog and elsewhere). Can anyone point me to code (R?) to make such a graphic? Thanks.

  2. I've always been a bit suspicious of Rasmussen, for the very reasons you listed. I haven't looked at it carefully, but I noticed the effects just casually.

    This seems pretty simple to resolve, though. If Rasmussen and others are going to argue that it results from methodological differences, why not have others simply replicate their methodology? If the results don't show up in the replication by others, discard Rasmussen as garbage.

    As a side note, Mr. Gelman, can you think of any reason methodology would cause this to occur? I can't think of any, but I'm neither a statistician nor a political scientist.

  3. I'm a different Rasmusen. It's hard to know whether there's a bias in approval polling, because there's no objective gauge. Is Rasmussen to the right of other polls in election polling, where we know (better) at the end what the right answer is? Does his six-month-in-advance difference from other polls help predict the result, or correlate negatively with it?

  4. There is a simple explanation for this. It's all about the "likely voter" screen. In fact, we can reason about the Rasmussen "likely voter" screen (which is a trade secret, apparently) based on the research that Shor is doing. You should read Shor's post, and I will refer to his findings. First, early in the election season, Rasmussen has a strong house effect. This is easily explained by the fact that no other pollster (that I know of) applies a "likely voter" screen as early as Rasmussen does. Rasmussen's screen must be heavily based on demographics and prior voting history. That produces a strong Republican tilt. As other pollsters start to apply a "likely voter" screen, Rasmussen's numbers start to look more like the other polls (at this point, I'm not making any assertion about accuracy even though Shor assumes as much).

    According to Shor, Rasmussen's house effect "very rapidly [re-]appears right before the Republican National Convention". At this point, the results of the Rasmussen "likely voter" screen must differ from other pollsters (resulting in a more Republican electorate). The house effect diminishes with time but remains statistically significant right up to election day. The other salient fact is that this house effect is more pronounced the further behind the Republican candidate is. Both of those facts could be explained by assuming that Rasmussen is overestimating the likelihood that Republican voters will turn up at the polls.

    Sam Wang demonstrated pretty conclusively that all the pollsters overestimate the vote share of the losers in blow-out states (at least in the presidential contest). The simple fact is that it is a lot easier for a Vermont Republican to tell the pollster on the phone that he supports McCain than it is to get off his duff and make it to the polls when he knows that there's no chance that McCain will carry Vermont (and vice versa for the Democrat in Utah).

    If your "likely voter" screen is biased towards Republicans demographically and your "enthusiasm" measure is positive-only (you give points for being "fired up" but ignore indicators of apathy), you'll inevitably count a lot of unlikely (dispirited) Republican voters in elections where their candidate is being blown out. Likewise, in 2008, you would have measured a big boost for Republicans right around the convention that tapered off as the election approached.
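    A toy simulation can illustrate the mechanism. This is a sketch with made-up numbers, not Rasmussen's actual (trade-secret) screen: every respondent has vote history, the trailing party's supporters have their enthusiasm (and thus turnout probability) cut in half, and the screen awards points for enthusiasm but never subtracts for apathy.

    ```python
    import random

    random.seed(1)

    N = 100_000
    poll = {"R": 0, "D": 0}   # respondents passing the likely-voter screen
    vote = {"R": 0, "D": 0}   # respondents who actually turn out

    for _ in range(N):
        party = "D" if random.random() < 0.65 else "R"
        # Hypothetical blowout: trailing-party supporters are dispirited,
        # so their enthusiasm (and turnout probability) is halved.
        enthusiasm = random.random() * (0.5 if party == "R" else 1.0)

        # Positive-only screen: vote history alone is enough to pass, and
        # high enthusiasm only adds points; apathy never subtracts any.
        # Since every respondent here has vote history, everyone passes.
        score = 1.0 + max(0.0, enthusiasm - 0.5)
        if score >= 1.0:
            poll[party] += 1

        # Actual turnout tracks enthusiasm.
        if random.random() < enthusiasm:
            vote[party] += 1

    poll_R = poll["R"] / (poll["R"] + poll["D"])
    vote_R = vote["R"] / (vote["R"] + vote["D"])
    print(f"trailing-party share: poll {poll_R:.3f} vs actual vote {vote_R:.3f}")
    ```

    Under these assumed numbers the screened poll shows the trailing party at roughly its raw supporter share (~35%), while its actual vote share is far lower (~21%), because its dispirited supporters pass the screen but stay home.
    
    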

Comments are closed.