Case-mix adjustment in non-randomised observational evaluations: the constant risk fallacy

Mohammed Mohammed points me to this article by Jon Nicholl, which begins:

Observational studies comparing groups or populations to evaluate services or interventions usually require case-mix adjustment to account for imbalances between the groups being compared. Simulation studies have, however, shown that case-mix adjustment can make any bias worse. One reason this can happen is if the risk factors used in the adjustment are related to the risk in different ways in the groups or populations being compared, and ignoring this commits the "constant risk fallacy". Case-mix adjustment is particularly prone to this problem when the adjustment uses factors that are proxies for the real risk factors. Interactions between risk factors and groups should always be examined before case-mix adjustment in observational studies.
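The fallacy is easy to see in a toy simulation (my own construction, not from the article): two services deliver identical care, but one records the risk-factor proxy less consistently, so the proxy carries different risk in the two groups. Crude comparison is fine; indirect standardisation that assumes a constant risk per stratum manufactures a difference.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical setup: true severity s raises death risk from 5% to 15%
# in BOTH services, so there is no real quality difference. The recorded
# risk factor x is a proxy for s: service A codes it exactly, service B
# codes it noisily and asymmetrically.
s_a = rng.binomial(1, 0.5, n)
s_b = rng.binomial(1, 0.5, n)
x_a = s_a                                            # A records severity exactly
x_b = rng.binomial(1, np.where(s_b == 1, 0.9, 0.3))  # B's coding is noisy

y_a = rng.binomial(1, np.where(s_a == 1, 0.15, 0.05))
y_b = rng.binomial(1, np.where(s_b == 1, 0.15, 0.05))  # same true risks

# Crude comparison is (correctly) near zero.
crude_diff = y_a.mean() - y_b.mean()

# Case-mix "adjustment" by indirect standardisation: pool the two
# services to get stratum-specific rates of death given x, which
# assumes x carries the same risk in both -- the constant risk assumption.
y = np.concatenate([y_a, y_b])
x = np.concatenate([x_a, x_b])
rate1, rate0 = y[x == 1].mean(), y[x == 0].mean()

def smr(y_g, x_g):
    """Standardised mortality ratio: observed / expected deaths."""
    expected = np.where(x_g == 1, rate1, rate0).sum()
    return y_g.sum() / expected

smr_a, smr_b = smr(y_a, x_a), smr(y_b, x_b)
# The adjustment creates an apparent difference (smr_a > 1 > smr_b)
# between two identical services, because x relates to risk differently
# in the two groups.
```

The crude difference stays near zero while the two SMRs split apart by several percent in opposite directions, which is exactly the pattern the article warns about when the adjustment factor is a proxy.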

This is interesting, and it connects to my struggles with survey weighting. Survey weighting is similar to adjusting for differences between treatment and control groups in an observational study; the survey analogue of the two groups is respondents and nonrespondents. Nicholl's article points out the difficulties that arise when adjustment ignores interactions, a problem we've found in survey adjustment as well. The solution is to include all interactions that are potentially important, but then the model becomes large, and we have to go beyond least squares and exchangeable models . . .
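The article's advice to examine interactions before adjusting can be as simple as comparing the risk factor's effect within each group before pooling. Here is a hypothetical sketch (my own toy data) where a binary factor roughly doubles risk in one group but is nearly unrelated to risk in the other:

```python
import numpy as np

def risk_difference(y, x):
    """Risk of the outcome in the x=1 stratum minus the x=0 stratum."""
    return y[x == 1].mean() - y[x == 0].mean()

# Toy data: x raises risk from 10% to 20% in group A, but only from
# 10% to 11% in group B -- a group-by-factor interaction.
rng = np.random.default_rng(0)
n = 50_000
x_a = rng.binomial(1, 0.5, n)
y_a = rng.binomial(1, np.where(x_a == 1, 0.20, 0.10))
x_b = rng.binomial(1, 0.5, n)
y_b = rng.binomial(1, np.where(x_b == 1, 0.11, 0.10))

rd_a = risk_difference(y_a, x_a)
rd_b = risk_difference(y_b, x_b)
# rd_a comes out near 0.10 and rd_b near 0.01: the factor's relation to
# risk differs sharply by group, so any adjustment that treats its
# effect as constant across groups is unsafe.
```

Equivalently one could fit a regression with a group-by-factor interaction term and look at its magnitude; the point is just to make the comparison before trusting the adjusted estimate.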

We discuss the general problem of adjusting for differences between treatment and control groups in chapters 9 and 10 of ARM, but we don't specifically focus on the importance of interactions.