Random restriction as an alternative to random assignment?

Robin Hanson writes,

To make sense of social complexity we would ideally want to add lots of randomization to people’s real choices, and then collect lots of data on what happens to them. But this seems a lot to ask of people. For example, people who eat at a restaurant might be willing to tell you how they felt later after eating there, but they’d be reluctant to eat a random item from the menu even one percent of the time.

Would people be more willing to have a few of their options randomly excluded? For example, would people mind much if on a menu of one hundred items one of the items was randomly excluded each time – “sorry we are out of that today”? Data about choices under such reduced menus would still have a key randomization component.

This idea occurred to me while talking to a cancer doctor who thought he could get thousands of cancer patients to agree to release data on their progress, but who would be more reluctant to accept a random treatment. Once standard drugs have failed, there are about twenty alternative drugs a patient could try, which they usually pick based on the side effects etc. Patients probably wouldn’t mind much having one of these options taken off the menu.

My thoughts:

I think I’d eat a random item 1% of the time as part of an experiment; after all, 1% of the time works out to only three or four lunches per year.

To get to your main proposal: I think if you randomly exclude one item, you get a study that is a mix of experiment and observational study. It could probably be analyzed more robustly than purely observational data, but the analysis would require more information than that of a pure experiment.
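
Here’s a minimal simulation sketch to make that concrete (Python, with made-up numbers: a 20-item menu, one confounder, and one item that genuinely helps). It is only meant to show the structure of the analysis: comparing people by which item they chose is confounded by self-selection, while comparing visits where an item was or was not randomly excluded gives a clean randomized contrast, though of a different quantity, namely the effect of offering the item.

```python
# A rough simulation of the randomly-excluded-menu-item design.
# Menu size, effect sizes, and the strength of confounding are all invented
# for illustration; nothing here comes from real data.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200_000, 20          # diner-visits and menu items (think: 20 cancer drugs)

item_effect = np.zeros(k)
item_effect[0] = 2.0        # item 0 genuinely improves the outcome

# Confounder: "health-consciousness" raises the outcome directly AND makes
# people more likely to pick item 0, so self-selected comparisons are biased.
health = rng.normal(size=n)
taste = rng.normal(size=(n, k))
taste[:, 0] += 2.0 * health

# Benchmark (normally unobservable): the true effect of offering item 0,
# i.e. what each diner would pick with vs. without item 0 on the menu.
choice_with = taste.argmax(axis=1)
choice_without = taste[:, 1:].argmax(axis=1) + 1
true_offer_effect = (item_effect[choice_with] - item_effect[choice_without]).mean()

# The design: on each visit, one randomly chosen item is "out today".
excluded = rng.integers(0, k, size=n)
taste[np.arange(n), excluded] = -np.inf   # the excluded item cannot be chosen
choice = taste.argmax(axis=1)
outcome = item_effect[choice] + 2.0 * health + rng.normal(size=n)

# Observational contrast: people who chose item 0 vs. people who didn't.
# Confounded, because choosing item 0 is correlated with health-consciousness.
naive = outcome[choice == 0].mean() - outcome[choice != 0].mean()

# Randomization-based contrast: visits where item 0 was on the menu vs. visits
# where it happened to be excluded. Exclusion is randomly assigned, so this
# contrast cleanly estimates roughly what offering item 0 is worth.
offer_hat = outcome[excluded != 0].mean() - outcome[excluded == 0].mean()

print(f"true effect of offering item 0:      {true_offer_effect:+.2f}")
print(f"estimate from random exclusions:     {offer_hat:+.2f}")
print(f"naive chose-it vs. didn't contrast:  {naive:+.2f}")
```

Note that the exclusion-based contrast estimates the effect of having the item available rather than the effect of eating it, which is essentially the point Robin makes in the first comment below.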

This sounds like something that marketing researchers might have studied too.

P.S. See here for much more from the marketing researchers.

2 thoughts on “Random restriction as an alternative to random assignment?”

  1. Well, by comparing groups who are and are not offered X, you would literally and directly see the effect of letting people have X as an option. And in some ways this is more directly policy-relevant than seeing the direct effects of X. If the FDA approves X, the direct result is that it becomes a new option, and only indirectly does X actually get used.

  2. Robin, your follow-up comment reminds me of the distinction made by medical and psychotherapy researchers between efficacy and effectiveness. Efficacy trials are those that take place under near-ideal conditions (random assignment, strict adherence to a protocol, highly motivated or well-paid subjects, etc.), with the goal of testing whether X causes Y. Effectiveness trials take place under more realistic conditions (which can include allowing patients to self-select into or out of treatment), with the goal of finding out whether a treatment will make a practical difference in the "real world."
    To give an example, one way of treating phobias is with something called "exposure therapy," which involves having patients spend time interacting with whatever they are afraid of (in very controlled circumstances and under the guidance of a clinician who helps them build up their coping skills). Exposure therapy shows very good and long-lasting results in efficacy trials. But in practice, people with phobias can be reluctant to try it (perhaps they hear the name and think they're just going to be tossed into a room full of tarantulas). As a result, its real-world effectiveness is somewhat compromised.
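
To put rough numbers on that distinction, here is a toy sketch; the 0.8-point benefit and 30% real-world uptake are arbitrary assumptions chosen only to show how partial uptake dilutes effectiveness relative to efficacy.

```python
# Toy illustration of efficacy vs. effectiveness; the benefit and uptake
# numbers are invented for illustration, not taken from any trial.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
benefit = 0.8          # improvement for a patient who actually completes treatment
uptake = 0.30          # assumed share of real-world patients willing to try it

baseline = rng.normal(0.0, 1.0, n)   # outcome without the treatment

# Efficacy trial: near-ideal conditions, everyone assigned to treatment receives it.
efficacy_gain = (baseline + benefit).mean() - baseline.mean()

# Effectiveness setting: the treatment is offered, but only a fraction take it up.
takes_it = rng.random(n) < uptake
effectiveness_gain = (baseline + benefit * takes_it).mean() - baseline.mean()

print(f"efficacy (average effect when delivered):    {efficacy_gain:.2f}")
print(f"effectiveness (average effect when offered): {effectiveness_gain:.2f}")
```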
