Ethics and statistics in development research

From Banerjee and Duflo, “The Experimental Approach to Development Economics,” Annual Review of Economics (2009):

One issue with the explicit acknowledgment of randomization as a fair way to allocate the program is that implementers may find that the easiest way to present it to the community is to say that an expansion of the program is planned for the control areas in the future (especially when such is indeed the case, as in phased-in design).

I can’t quite figure out whether Banerjee and Duflo are saying that they would lie and tell people that an expansion is planned when it isn’t, or whether they’re deploring that other people do it.

I’m not bothered by a lot of the deception in experimental research–for example, I think the Milgram obedience experiment was just fine–but somehow the above deception bothers me. It just seems wrong to tell people that an expansion is planned if it’s not.

P.S. Overall the article is pretty good. My only real problem with it is that when discussing data analysis, they pretty much ignore the statistical literature and just look at econometrics. In the long run, that’s fine—any relevant developments in statistics should eventually make their way over to the econometrics literature. But for now I think it’s a drawback in that it encourages a focus on theory and testing rather than modeling and scientific understanding.

Here are the titles of some of the cited papers:

Bootstrap tests for distributional treatment effects in instrumental variables models
Nonparametric tests for treatment effect heterogeneity
Testing the correlated random coefficient model
Asymptotics for statistical decision rules

Most of the things in the paper, and most of the references, are applied rather than theoretical, so I’m not claiming that Banerjee and Duflo are ivory-tower theorists. Rather, I’m suggesting that their statistical methods might not be allowing them to get the most out of their data–and that they’re looking in the wrong place when researching better methods. The problem, I think, is that they (like many economists) think of statistical methods not as a tool for learning but as a tool for rigor. So they gravitate toward math-heavy methods based on testing, asymptotics, and abstract theories, rather than toward complex modeling. The result is a disconnect between statistical methods and applied goals.
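
To give a sense of the distinction, here is a minimal sketch in Python. Everything is simulated and none of the numbers come from Banerjee and Duflo's paper; take it as an illustration of the two habits of mind, not a reanalysis. The "testing" habit asks which site-level effects are statistically significant; the "modeling" habit estimates the distribution of effects across sites and partially pools the noisy site estimates:

```python
# A minimal sketch (simulated data, crude empirical-Bayes pooling)
# contrasting per-site significance testing with partial pooling of
# site-level treatment effects.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a multi-site experiment: true site effects vary around 0.3.
n_sites, n_per_arm = 12, 40
true_effects = rng.normal(0.3, 0.2, n_sites)

est, se = [], []
for theta in true_effects:
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(theta, 1.0, n_per_arm)
    est.append(treated.mean() - control.mean())
    se.append(np.sqrt(control.var(ddof=1) / n_per_arm
                      + treated.var(ddof=1) / n_per_arm))
est, se = np.array(est), np.array(se)

# The "testing" summary: count the sites whose effect clears |z| > 1.96.
print("sites significant at 5%:", int(np.sum(np.abs(est / se) > 1.96)),
      "of", n_sites)

# The "modeling" summary: estimate the population of site effects and
# partially pool each noisy site estimate toward the grand mean.
grand = np.average(est, weights=1 / se**2)         # precision-weighted mean
tau2 = max(0.0, est.var(ddof=1) - np.mean(se**2))  # crude moment estimate
if tau2 > 0:                                       # of between-site variance
    shrunk = (est / se**2 + grand / tau2) / (1 / se**2 + 1 / tau2)
else:
    shrunk = np.full_like(est, grand)
print("grand mean %.2f, between-site sd %.2f" % (grand, tau2**0.5))
print("partially pooled site effects:", np.round(shrunk, 2))
```

The testing summary throws away most of the sites (the "non-significant" ones tell you nothing); the modeling summary uses every site to say how big the effect is and how much it varies.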

15 thoughts on “Ethics and statistics in development research”

  1. When you say complex modeling, how complex do you mean? If you could add a small number of things to econometrics, what would you add? Is your handy statistical lexicon list a good place to start? Is there anything missing from it? Is your ARM book missing anything of this sort?

  2. Why expand the program if they have no idea yet whether (or to what extent) it works? They could have said that there are limits to the number of sites they can implement, and that the fairest way to select the sites is to do so randomly.
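
    Concretely, with capacity for only k sites, the lottery itself is a few lines of code (hypothetical site names, just to illustrate the allocation rule):

    ```python
    # A tiny sketch of lottery allocation under a capacity constraint.
    import random

    random.seed(2024)  # fix the seed so the draw is auditable
    eligible = ["site_%02d" % i for i in range(1, 21)]  # hypothetical sites
    k = 8                                # implementation capacity this round
    chosen = random.sample(eligible, k)  # every eligible site has equal odds
    print("implement now:", sorted(chosen))
    print("waitlist:     ", sorted(set(eligible) - set(chosen)))
    ```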

  3. "The problem, I think, is that they (like many economists) think of statistical methods not as a tool for learning but as a tool for rigor. So they gravitate toward math-heavy methods based on testing, asymptotics, and abstract theories, rather than toward complex modeling."

    Bravo! I have never seen someone put so clearly the way economists use empirical methods. Maybe economists themselves (and many political scientists) will even agree with your description, but I doubt they would agree with your conclusion: "The result is a disconnect between statistical methods and applied goals." See, for instance, the paper by Acemoglu in this symposium here:

    http://www.aeaweb.org/issue.php?journal=JEP&volum

    At the end of the day I think that many people (including possibly me!) simply don't know what the complex modeling you refer to is, or how it can be applied to social research.

  4. I'm flabbergasted by the "not troubled by deception" comment. Just a couple of months ago you were not just troubled but outraged that someone had pretended to want to meet with you to get some advice on their research, where the pretense was part of their project. Merely having to decide whether to say "yes" or "no" to a deceptive request (the student wasn't actually going to meet with you) got you in a lather. And it's not that you were bothered by having a student request your help; you're pretty generous that way, and I've never heard you complain about a sincere request for help, even the ones you decline. No, you were bothered by the deception (which is ironic, considering you were helping the student merely by responding to their request). Anyway, I'm calling "bullshit" on the "not bothered by deception" claim. Or perhaps you just mean that you're not bothered when _other_ people are deceived; I guess that is consistent with the facts too!

  5. Phil:

    I guess it doesn't take much to flabbergast you. . . .

    Just to clarify things, here's a quote from my blog on May 16:

    As noted in my earlier blog entries, I have no problem whatsoever with the study's use of deception.

    And, from May 6:

    The issue isn't the deception, it's that the participation was involuntary.

    Also this:

    I am not bothered by deception in psychology experiments, from Milgram on down.

    So, yeah, I'm not bothered by deception. I think I've been pretty clear on that point!

  6. This post is intriguing, but if you have an extra moment, I think it would be helpful (especially for economists) if you provided a concrete example.

    Thanks.

  7. OK, I obviously should have reread all that stuff from back then. I didn't understand your objection the first time, which I suppose is why it didn't stick. You say you don't object to deception; fine. And you don't object to being asked a brief email question about whether you're willing to participate in a survey or study. And you don't object to being asked for help by a student. And yet you object to being asked for help by a student if, deceptively, your answer about whether you're willing to meet is itself exactly the help that is needed.

    OK, I have just gone back and reread the whole thread. I still don't understand your objection at all. Maybe I misunderstood something: you didn't actually meet with the student, right? Your entire "participation", until you wrote your email response to the profs, was to read an email and perhaps write a brief response indicating whether you were _willing_ to meet with a student?

    How are they supposed to ask you whether you consent to being in their study? Suppose a student sent you an email saying "I'd like to speak with you briefly next week to see if you're willing to be in a study." Would you object to receiving such an email, if they would in fact like to speak with you briefly to see if you're willing to be in a study? If you would object to this, then what you seem to be saying is that nobody should conduct surveys, ever, because merely asking someone if they might be willing to participate is an involuntary waste of their time.

    If you would not object to being asked to participate as long as they are sincere, but you do object to being asked to participate if they are not sincere (as in the case in question), then it seems to me that you are indeed objecting to the deception. You seem to be saying that sending an email that says "I'd like to speak with you for ten minutes" is fine if sincere but not if insincere. And yet you claim it's not the insincerity that bothers you. So what is it?

    Perhaps you are taking an extreme "human subjects" position that it's wrong — indeed, "abusive" is the word you use — to analyze any data collected from people without their consent, but it's hard for me to believe that you really feel that way. Is it really "abusive" to study survey non-response, for example? Even someone _not_ returning a survey is providing a data point.

    What if I want to study the relationship between the length of a survey and the response rate, so I send out different surveys — a postcard, a single page, a small booklet — full of innocuous questions, with an explanation that I am a student doing sociological research and that I'd appreciate it if you would respond to my survey. I then study the nonresponse rate versus survey length. I do not provide compensation. Am I being "abusive"? I think you'll say Yes to this (i.e. it's "abusive"), whereas I would say No. But perhaps I'm wrong about your response.

    To get back to the issue of using data from people without their consent, here's something you have done: you once had a student count how many bikes and how many cars pass a variety of intersections in Berkeley, to try to estimate the effectiveness of Berkeley's bike lane system. Was that wrong, because you had not asked each of the drivers and bikers to participate? You didn't think so at the time, but perhaps you think so now?

    I guess I'm no longer flabbergasted — OK, you say you don't, and never did, object to deception — but I'm still perplexed. Maybe this post isn't the right place for this discussion, perhaps it belongs in a broader discussion of the ethics of human subjects research. I just still don't have any idea what you think is "abusive" about that previous research, which seems to me to stand out only by virtue of the fact that the initial email contact was deceptive.

  8. Phil:

    I think they should have compensated me (and the other participants in the survey) for our time. I feel this about surveys in general and have said so many times on this blog. In your example with the postcard etc., yes, I think it would be only right to compensate the participants for their time. When my student counted bikes, this was a little different: it did not bother the bicyclists, drivers, etc. I don't see the need to compensate someone for merely being passively observed.

    Finally, yes, the researchers in this study could have contacted me and asked in a vague way whether I was willing to participate in their study. And I would then have deleted their email and never thought about it again. Because it is my job to talk with students, and it is not my job to participate in studies, I respond differently to requests from students than to requests from researchers.

    Anyway, to bring this back to the subject of the present blog entry: Ethics is subjective, and I understand that you (and others) do not find that earlier study objectionable whereas I (and others) did. What struck me about the Banerjee and Duflo article above was that they didn't even seem to consider that there might be ethical problems with telling people that the program would be expanded. And I'm not saying that Banerjee and Duflo are necessarily doing something wrong, just that it "bothers me."

  9. Well, I'll agree with this implicit principle: You have a right to be outraged by whatever you choose. I think you have chosen a strange thing to be outraged by, but, hey, go for it.

  10. Andrew, I do this kind of research, and I think you misunderstand the quote. It has nothing to do with lying about getting a future program. What they mean is that the control group WILL get the program, but it is better for research if they don't know this in advance. The fear is that they will change their behavior based on this knowledge. For example, if there is a housing program that goes to the treatment group, the control group would do nothing to improve their current housing if they knew they'd get the program in a year. Then the measured comparison between treatment and control is not the true difference.
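
    Here is a minimal simulation of that bias, with invented numbers for a "housing quality" index on an arbitrary scale, just to show the direction of the problem:

    ```python
    # A minimal sketch (all numbers invented) of anticipation bias:
    # control households who know the program is coming skip the
    # improvements they would otherwise have made, so the measured
    # treatment/control gap overstates the program's true effect.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000                   # households per arm
    baseline = 5.0             # housing-quality index, arbitrary scale
    program_gain = 2.0         # gain from receiving the program
    own_gain = 0.5             # improvements controls would make on their own
    true_effect = program_gain - own_gain  # effect vs. business as usual

    treated = baseline + program_gain + rng.normal(0, 1, n)
    control_uninformed = baseline + own_gain + rng.normal(0, 1, n)
    control_informed = baseline + rng.normal(0, 1, n)  # wait for the program

    print("true effect vs. business as usual: %.2f" % true_effect)
    print("estimate, controls uninformed:     %.2f"
          % (treated.mean() - control_uninformed.mean()))
    print("estimate, controls told to wait:   %.2f"
          % (treated.mean() - control_informed.mean()))
    ```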

  11. i:

    Banerjee and Duflo write, "the easiest way to present it to the community is to say that an expansion of the program is planned for the control areas in the future (especially when such is indeed the case . . ."

    This would seem to imply that sometimes no expansion is actually planned, no?

  12. Andrew:

    I see the confusion. No, what they mean is it is easier politically to get the community and the implementing program to buy into the randomized evaluation if they can market it as a phased-in approach. This avoids resentment among the control group. So the expansion will happen no matter what, but the researchers prefer not to tell the control group yet, while the program prefers to avoid community tension by telling them they just have to wait a bit.

    Sometimes whether the expansion will happen depends on the evaluation results (was the program effective?). In this case we either a) tell the community we are trying out a new program and depending on the results it MAY be expanded; or b) don't say anything (again, better for research).

    But we never claim a program is coming when it isn't.

  13. I amend my statement. You understand the quote perfectly well. I think it's sloppy writing. I know Banerjee and Duflo don't use this deception and I haven't heard of other researchers doing it either. I can't imagine why anyone would do it. It would create biased results and a pissed-off community for very little short-term gain.
