The statistics and the science

Yesterday I posted a review of a submitted manuscript in which I first wrote that I had read the paper only shallowly, and then followed up with some suggestions on the statistical analysis: I recommended that overdispersion be added to the fitted Poisson regression and that the table of regression results be supplemented with a graph showing the data and fitted lines.

A commenter asked why I wrote such an apparently shallow review, and I realized that some of the implications of my review were not as clear as I’d thought. So let me clarify.

There is a connection between my general reaction and my statistical comments. My statistical advice here is relevant for (at least) two reasons. First, a Poisson regression without overdispersion will give nearly uninterpretable standard errors, which means that I have no sense of whether the results are statistically significant as claimed. Second, with only a time series plot and a regression table, but no graph showing the estimated treatment effect, it is very difficult for me to visualize the magnitude of the estimated effect. Both of these serious statistical problems underlie the issue noted at the beginning of my review, that I “didn’t try to judge whether the conclusions are correct.” It is the authors’ job to correctly determine statistical significance (or use some other measure of uncertainty) and to put their estimates into context. How can I possibly judge correctness if I don’t know whether the results are statistically significant and if I don’t have a sense of how large they are compared to variation in the data? I liked the paper, and that’s why I made my suggestions.
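To make the overdispersion point concrete, here is a minimal sketch in Python with statsmodels. The data are simulated and the model is a stand-in, not anything from the manuscript under review: it fits a plain Poisson regression, then a quasi-Poisson fit whose standard errors are rescaled by the estimated dispersion, and finally draws the kind of data-plus-fitted-line graph I asked for.

```python
# Minimal sketch: simulated overdispersed counts, a plain Poisson fit,
# and a quasi-Poisson fit (standard errors scaled by estimated dispersion).
# The data and model here are illustrative stand-ins, not the manuscript's.
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = np.exp(1.0 + 0.3 * x)
# Multiply the Poisson mean by gamma noise (mean 1) to induce overdispersion.
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
# scale="X2" estimates the dispersion from the Pearson chi-squared statistic,
# giving quasi-Poisson standard errors; the point estimates are unchanged.
quasi_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")

print("Poisson SEs:      ", poisson_fit.bse)  # too small under overdispersion
print("Quasi-Poisson SEs:", quasi_fit.bse)    # inflated by sqrt(dispersion)

# The graph I asked for: raw data with the fitted curve overlaid.
order = np.argsort(x)
plt.scatter(x, y, s=10, alpha=0.4, label="data")
plt.plot(x[order], quasi_fit.fittedvalues[order], color="red", label="fitted")
plt.xlabel("x")
plt.ylabel("count")
plt.legend()
plt.show()
```

A negative binomial model (sm.families.NegativeBinomial) is another standard way to handle the overdispersion; either fix makes the standard errors interpretable, which is the whole point of the suggestion.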

4 thoughts on “The statistics and the science”

  1. In that case, it might be clearer to say something like "because of the statistical problems noted below, I could not judge whether the manuscript's conclusions were correct." Like the other commenter, I thought it was just that, since there were some mistakes in the paper, you decided not to give it much of a reading.

  2. I think that whether reading the rest of the paper makes sense depends on what the mistakes are. In my field we recently had a plethora of papers that misallocated person-time (using future information), which almost always led to a bias toward a strong protective effect. As soon as I see this mistake, I know that the discussion and conclusion aren't worth critiquing, as the results need to be redone (discussion of incorrect results isn't very edifying).

    In such cases I might also send a fairly brief review as there isn't much to discuss until we figure out whether this bias changed the interpretation in important ways.

  3. So maybe this was less than a full review of the paper, but my earlier comment was partly prompted by another post of yours saying that you typically spend about 15 minutes reviewing a paper. I do think reviewing is a serious job, if only to save others from having to read crappy papers at a later stage. Sure, there are loads of papers submitted each year, but there are also loads of people writing them: if each paper is reviewed by 3 people and each paper averages 3 or 4 authors, then reviewing one paper for every one you submit is probably fair and wouldn't be too taxing for most authors, even at 4 hours of reviewing time per review.

  4. David:

    I review many more papers each year than I submit to journals. Rather than spending 4 hours reviewing each, I think I make more of a contribution to society by spending 15 minutes reviewing and the other 3.75 hours blogging (where my comments can reach thousands of people rather than just one).
