Distinguishing association from causation

I was pointed to Distinguishing Association from Causation: A Background for Journalists (there is also a PDF version). Here is my summary of their executive summary:

  • Scientific studies that show an association between a factor and a health effect do not necessarily imply that the factor causes the health effect.
  • Randomized trials are studies in which human volunteers are randomly assigned to receive either the agent being studied or an inactive placebo, usually under double-blind conditions.
  • The findings of animal experiments may not be directly applicable to the human situation because of genetic, anatomic, and physiologic differences between species and/or because of the use of unrealistically high doses.
  • In vitro experiments are useful for defining and isolating biologic mechanisms but are not directly applicable to humans.
  • The findings from observational epidemiologic studies are directly applicable to humans, but the associations detected in such studies are not necessarily causal.
  • Useful, time-tested criteria for determining whether an association is causal include:
    • Temporality. For an association to be causal, the cause must precede the effect.
    • Strength. Scientists can be more confident in the causality of strong associations than weak ones.
    • Dose-response. Responses that increase in frequency as exposure increases are more convincingly supportive of causality than those that do not show this pattern.
    • Consistency. Relationships that are repeatedly observed by different investigators, in different places, circumstances, and times, are more likely to be causal.
    • Biological plausibility. Associations that are consistent with the scientific understanding of the biology of the disease or health effect under investigation are more likely to be causal.
  • Studies that include appropriate statistical analysis and that have been published in peer-reviewed journals carry greater weight than those that lack statistical analysis and/or have been announced in other ways.
  • Claims of causation should never be made lightly.
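The first point — that an association need not imply causation — can be sketched with a toy simulation. The variable names (age, coffee, heart risk) and all the numbers are purely illustrative, not taken from the report or any real study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: age drives both the exposure and the outcome.
age = rng.normal(50, 10, n)
coffee = 0.05 * age + rng.normal(0, 1, n)       # exposure, driven by age
heart_risk = 0.1 * age + rng.normal(0, 1, n)    # outcome, driven by age only

# Coffee has NO causal effect on heart_risk here, yet they are correlated:
r = np.corrcoef(coffee, heart_risk)[0, 1]
print(f"correlation(coffee, heart_risk) = {r:.2f}")

# Adjusting for the confounder (partial correlation via residuals on age)
# makes the spurious association vanish:
coffee_resid = coffee - np.polyval(np.polyfit(age, coffee, 1), age)
risk_resid = heart_risk - np.polyval(np.polyfit(age, heart_risk, 1), age)
r_adj = np.corrcoef(coffee_resid, risk_resid)[0, 1]
print(f"correlation after adjusting for age = {r_adj:.2f}")
```

The raw correlation is substantial even though the causal effect is exactly zero, which is the whole point of the report's first bullet.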

But all this isn’t really about causation vs. association; it’s about better and worse studies. Association and causation are not binary categories. Instead, there is a continuum from simple models on observational data (correlation between two variables), through more sophisticated models on observational data that include covariates (regression, structural equation models), through still more sophisticated models on observational data that take sample selection bias into consideration (Rubin’s propensity score approach), to often simple models on controlled data (randomized experiments). But the mysterious causal “truth” is still out there. Talk to philosophers these days and you’ll find they’re not even happy that the notion of causality is a powerful enough model of reality.
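This continuum can be sketched with simulated data where the causal “truth” is known in advance. This is a toy illustration, not any particular study’s method; for brevity the propensity weighting uses the true propensity score, which in practice would itself have to be estimated (e.g., by logistic regression):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
TRUE_EFFECT = 1.0  # the causal "truth" we are trying to recover

# Observational world: a covariate x drives both treatment uptake and outcome.
x = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-x))             # e.g., sicker people seek treatment
t = rng.binomial(1, p_treat)
y = TRUE_EFFECT * t + 2 * x + rng.normal(0, 1, n)

# 1. Simple model on observational data: raw difference in means (confounded).
naive = y[t == 1].mean() - y[t == 0].mean()

# 2. Regression on observational data with the covariate included.
X = np.column_stack([np.ones(n), t, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]

# 3. Propensity-score (inverse-probability) weighting.
w = t / p_treat + (1 - t) / (1 - p_treat)
ipw = np.average(y, weights=t * w) - np.average(y, weights=(1 - t) * w)

# 4. Randomized experiment: assignment is independent of x, so the raw
#    difference in means is unbiased without any modeling.
t_rand = rng.binomial(1, 0.5, n)
y_rand = TRUE_EFFECT * t_rand + 2 * x + rng.normal(0, 1, n)
randomized = y_rand[t_rand == 1].mean() - y_rand[t_rand == 0].mean()

print(f"naive:      {naive:.2f}")
print(f"regression: {adjusted:.2f}")
print(f"ipw:        {ipw:.2f}")
print(f"randomized: {randomized:.2f}")
```

The naive estimate is badly biased, while the regression, propensity-weighting, and randomized estimates all land near the true effect — though the observational corrections only work because the model happens to include the right covariate, which is exactly what one never knows for sure.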

In the past, I’ve often unfairly complained about studies after reading misleading journalistic reports, so this report is a timely one. But since the report was paid for by large pharma corporations, people may wonder whether it carries a bias or some sort of agenda.

My quick impression is that they’re promoting best practices in statistical methodology, practices to which all these companies subscribe. But there could be greater use of cheaper observational studies with better modeling (such as the propensity score approach, or even just better regression modeling) in place of expensive randomized experiments, and society might be better off as a result. Moreover, there is the issue of statistical versus practical significance. What do you think?
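On that last point: with a large enough sample, a practically negligible difference can still produce a tiny p-value. A toy illustration with made-up numbers (a shift of 0.1 on a scale whose standard deviation is 15):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 4_000_000

# Two groups differing by a trivial amount relative to the noise.
control = rng.normal(100.0, 15.0, n)
exposed = rng.normal(100.1, 15.0, n)

diff = exposed.mean() - control.mean()
se = math.sqrt(control.var(ddof=1) / n + exposed.var(ddof=1) / n)
z = diff / se
# Two-sided p-value from the normal approximation.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"difference = {diff:.3f} (standardized: {diff / 15:.4f}), p = {p:.1e}")
```

The result is “statistically significant” by any conventional threshold, yet the standardized effect is tiny — a distinction that journalistic reports often blur.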