The paradox of importance in statistical research

I read a couple of psychology papers recently and was impressed by their thoroughness. Each of these papers (one by Pelham, Mirenberg, and Jones, and one by Roberts) had 10 separate studies covering different aspects of their claims. The standard in applied statistics seems much lower: what’s expected is that we do one good data analysis, along with explorations of what might have happened had the data been analyzed differently, assessment of the importance of assumptions, and so on.

Standards are lower in applied statistics

The difference, I think, is that in a statistics paper–even an applied statistics paper–the goal is to study or demonstrate a method rather than to make a convincing scientific case about the application under consideration. I mean, the scientific claims being presented should be plausible, but the standards of evidence seem quite a bit lower than in psychology.

What about other fields? Biology and medicine, oddly enough, seem more like statistics than psychology in their “convincingness” standards. In these fields, it seems common for findings to be reversed on later consideration, and typically a research paper will present the result of just one study. (In medicine, it is common to have review articles that summarize 40 or more studies in an area, and it seems accepted that individual studies are not supposed to be convincing in themselves.)

Political science, economics, and sociology seem somewhere in between. Research papers in these fields will sometimes, but not always, include multiple studies, but there is also often a requirement for a theoretical argument of some sort. It's not enough to show that something happened; you also have to explain how it fits into (or refutes) some theoretical model.

The paradox of importance

Getting back to statistical research, one thing I’ve noticed is that the most elaborate research can be done on relatively unimportant problems. If there is a hurry to solve some important problem, then we’ll use the quickest methods at hand–we don’t have the time to waste on developing fancy methods. But if it’s something that nobody cares about . . . well, then we can put in the effort to really do a good job! In the long run, these new methods we develop can become the quick methods for the important problems of the future, but meanwhile we often see cutting-edge applied statistical research on problems that are of little urgency.

2 thoughts on “The paradox of importance in statistical research”

  1. Would it be fair to say that the lack of rigor in medical and biological studies is a by-product of their perceived urgency? I detect a definite bias in the life sciences in favor of speed over completeness. At some level this makes sense, but the number of conclusions that get overturned here is a little alarming. Does this happen at a noticeably higher rate than in other fields?

  2. As a PhD student in clinical psychology with a partner who is a PhD student in cognitive/experimental psychology, I think the 'quality'/'thoroughness' of psychology research breaks down into multiple camps. I find that much of the clinical psychology research that gets published is quite slapdash, observational (with the exception of RCTs, of course), and very likely to be contradicted or overturned in future studies. This may be, as paulse suggests, because clinical psychologists – like medical researchers – may feel that their 'applied' research is very important and must get out right away to help/save the world ;P (even if the research is just a half-arsed observational study using a convenience sample to run a psychometric analysis on a mediocre clinical questionnaire).
    The experimental literature (i.e., cognitive psychology), perhaps due to its more 'basic science' nature and focus, tends to be much more thorough and hard-nosed. The studies will usually be randomized experiments, and will often include (as you mentioned) several experimental investigations of the same question with minor to moderate tweaks so as to home in elegantly on the 'true state of nature'. This may be because these types of experiments are 'easier' to run in some ways (college students sitting at computer screens), but it also reflects (I think) the greater difficulty of publishing in the experimental psychology realm. It's not uncommon for a cognitive psychology manuscript to be rejected (or accepted pending major revisions) with the demand that the authors run additional experiments. This rarely happens in clinical psychology (and I would guess in the medical sciences too). Sure, our papers get rejected (or accepted pending revisions), but generally all that is required is to try a different analysis, add to the discussion, or simply submit the manuscript elsewhere.

    Just my two cents :)

    Cheers,

    Pat