How to think about instrumental variables when you get confused

What with all this discussion of causal inference, I thought I’d rerun a blog entry from a couple years ago about my personal trick for understanding instrumental variables:

“Instrumental variables” is an important technique in applied statistics and econometrics, but it can get confusing. See here for our summary (in particular, you can take a look at chapter 10, but chapter 9 would help too).

Now an example. Piero spoke in our seminar on the effects of defamation laws on reporting of corruption in Mexico. In the basic analysis, he found that, in the states where defamation laws are more punitive, there is less reporting of corruption, which suggests a chilling effect of the laws. But there are the usual worries about correlation-is-not-causation, and so Piero did a more elaborate instrumental variables analysis using the severity of homicide penalties as an instrument.

We had a long discussion about this in the seminar. I originally felt that “severity of homicide penalties” was the wackiest instrument in the world, but Piero convinced me that it was reasonable as a proxy for some measure of general punitiveness of the justice system. I said that if it’s viewed as a proxy in this way, I’d prefer to use a measurement-error model, but I can see the basic idea.

Still, though, there was something bothering me. So I decided to go back to basics and use my trick for understanding instrumental variables. It goes like this:

The trick: how to think about IVs without getting too confused

Suppose z is your instrument, T is your treatment, and y is your outcome. So the causal model is z -> T -> y. The trick is to think of (T,y) as a joint outcome and to think of the effect of z on each. For example, an increase of 1 in z is associated with an increase of 0.8 in T and an increase of 10 in y. The usual “instrumental variables” summary is to just say that the estimated effect of T on y is 10/0.8=12.5, but I’d rather keep the two pieces distinct and report the effects of z on T and on y separately.
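
Here’s a minimal sketch of that “joint outcome” bookkeeping, using simulated data with made-up coefficients chosen to match the numbers above (nothing here is Piero’s actual data; the variable names are just for illustration):

```python
import numpy as np

# Simulate data consistent with the made-up numbers above:
# an increase of 1 in z raises T by 0.8, and the effect of T on y is 12.5,
# so the effect of z on y works out to 0.8 * 12.5 = 10.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)               # instrument
T = 0.8 * z + rng.normal(size=n)     # treatment
y = 12.5 * T + rng.normal(size=n)    # outcome

def slope(x, w):
    # Least-squares slope of w regressed on x (with an intercept).
    return np.cov(x, w, ddof=1)[0, 1] / np.var(x, ddof=1)

print(f"effect of z on T: {slope(z, T):.2f}")   # about 0.8
print(f"effect of z on y: {slope(z, y):.2f}")   # about 10
# The usual instrumental-variables summary is just the ratio of the two:
print(f"IV estimate of the effect of T on y: "
      f"{slope(z, y) / slope(z, T):.2f}")        # about 12.5
```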

In Piero’s example, this translates into two statements: (a) States with higher penalties for murder had higher penalties for defamation, and (b) States with higher penalties for murder had less reporting of corruption.

Fine. But I don’t see how this adds anything at all to my understanding of the defamation/corruption relationship, beyond what I learned from his simpler finding: States with higher penalties for defamation had less reporting of corruption.

In summary . . .

If there’s any problem with the simple correlation, I see the same problems with the more elaborate analysis, that is, with the pair of correlations that gets the label “instrumental variables analysis.” I’m not opposed to instrumental variables in general, but when I get stuck, I find it extremely helpful to go back and see what I’ve learned from separately thinking about the correlation of z with T and the correlation of z with y, since that’s ultimately what instrumental variables analysis is doing.

4 thoughts on “How to think about instrumental variables when you get confused”

  1. Re whether it adds "anything at all" — speaking as a relative novice here, doesn't it add more accuracy to the process? I.e., a correlation between T and Y may appear too large when measured directly, whether because of selection effects or because of something unobserved in the error term, etc. So you look for an instrument Z that somehow affects T and thus Y, but without those selection effects or unobserved factors.

  2. You could use the latent variable "severity of punishments" as an instrument for the effect of defamation on corruption charges. With only two indicators for this latent variable, as in the above example, this is not identified. But if you add only one more indicator, for example the severity of tax fraud laws, then you would have 2 degrees of freedom.

    So if you add another indicator, you can formulate an instrumental variables model in terms of causal relationships rather than just correlations, using the latent variable as an instrument.

  3. Stuart: The IV analysis adds nothing in terms of accuracy or statistical precision; it's merely a reinterpretation of regression coefficients obtained from the joint-outcome analysis. If it adds "anything at all," it's in the interpretation. But that's the part I don't buy.

    Daniel: I'm sure you're right about the statistical identification–if you buy the story of the model. I find this confusing enough that I prefer the direct joint-outcome interpretation, which is why I feel more comfortable stopping right there.

  4. I must be misunderstanding this then. It seems that everything I read on instrumental variables talks about bias in the OLS regression model because the independent variable is correlated with the error term.
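
Re the comments above about bias from the treatment being correlated with the error term: here is a small simulation sketch (all numbers hypothetical, not from Piero’s data) of that textbook situation. An unobserved confounder u makes the direct regression of y on T overstate the effect, while the two z-regressions, and hence their ratio, are unaffected. Note that the IV estimate is still nothing more than the ratio of the two z-slopes.

```python
import numpy as np

# Hypothetical setup: an unobserved confounder u drives both T and y,
# so the direct regression of y on T is biased; z affects T but is
# independent of u.
rng = np.random.default_rng(1)
n = 100_000
u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # instrument, independent of u
T = 0.8 * z + u + rng.normal(size=n)          # treatment
y = 12.5 * T + 5.0 * u + rng.normal(size=n)   # true effect of T on y is 12.5

def slope(x, w):
    # Least-squares slope of w regressed on x (with an intercept).
    return np.cov(x, w, ddof=1)[0, 1] / np.var(x, ddof=1)

print(f"direct regression of y on T: {slope(T, y):.2f}")  # biased, about 14.4
print(f"effect of z on T:            {slope(z, T):.2f}")  # about 0.8
print(f"effect of z on y:            {slope(z, y):.2f}")  # about 10
print(f"ratio (IV estimate):         {slope(z, y) / slope(z, T):.2f}")  # about 12.5
```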
