Moral hazards in impact factors

Carrie links to a Wall Street Journal article about scientific journals that encourage authors to refer to other articles from the same journal:

John B. West has had his share of requests, suggestions and demands from the scientific journals where he submits his research papers, but this one stopped him cold. . . After he submitted a paper on the design of the human lung to the American Journal of Respiratory and Critical Care Medicine, an editor emailed him that the paper was basically fine. There was just one thing: Dr. West should cite more studies that had appeared in the respiratory journal. . . . “I was appalled,” says Dr. West of the request. “This was a clear abuse of the system because they were trying to rig their impact factor.” . . .

The result, says Martin Frank, executive director of the American Physiological Society, which publishes 14 journals, is that “we have become whores to the impact factor.” He adds that his society doesn’t engage in these practices. . . .

From my discussions with Aleks and others, I have the impression that impact factors are taken more seriously in Europe than in the U.S. They also depend on the field. The Wall Street Journal article says that impact factors "less than 2 are considered low." In statistics, though, an impact factor of 2 would be great: JASA and JRSS are between 1 and 2, and Biometrics and Biometrika are around 1. Meanwhile, Statistics in Medicine (1.4) and Statistical Methods in Medical Research (1.9), which are considered fine but not top stat journals, match or beat the top stat journals on this measure. You gotta reach those doctors (or the computer scientists and physicists; they cite each other a lot).
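For reference, the standard two-year impact factor is just a ratio: citations received this year to a journal's articles from the previous two years, divided by the number of citable items it published in those two years. Here's a minimal sketch, with made-up numbers, of why a few coerced self-citations per paper can move the needle:

```python
# Minimal sketch of the standard two-year impact factor.
# All numbers here are hypothetical, purely for illustration.

def impact_factor(citations, citable_items):
    """Citations received this year to articles the journal published in
    the previous two years, divided by the number of citable items it
    published in those two years."""
    return citations / citable_items

# Suppose a journal published 400 citable items over two years, and
# those items drew 600 citations this year:
print(impact_factor(600, 400))        # 1.5 -- "low" by the WSJ's standard

# If editors coerce 2 extra self-citations from each of 100 accepted
# papers, the numerator grows by 200 while the denominator stays put:
print(impact_factor(600 + 200, 400))  # 2.0
```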

4 thoughts on “Moral hazards in impact factors”

  1. From working closely with a medical journal, I can tell you that the impact factor is far more important there.

    Because pharmaceutical companies buy ads in medical journals (a great way to target your audience), Impact Factors help set ad rates. And these ads are a huge source of revenue for the professional societies that run the journals.

    Therefore, inflating an Impact Factor by a point can be worth hundreds of thousands of dollars for a professional society each year!

    That said, I've never seen the journal I work with do anything shifty, but Impact Factor is kept in mind when making decisions (e.g., "This paper might not be ideal for our readership, but it'll get cited a lot and improve our IF"). We've certainly never micromanaged references!

  2. This reminds me of Goodhart's law (kudos to Jason's blog): whatever metric one chooses to judge by, people will manipulate or overfit it, and the metric will eventually lose its predictive power.

    For example, college students will cram with the aid of smart drugs so as to max out their GPAs (at the expense of actually learning and understanding the material), or will take coaching to max out their SAT scores (instead of letting the test measure inherent, non-overfitted ability). Corporations will do creative accounting to show investors good-looking balance sheets (instead of actually pursuing long-term returns). Academics will maximize their paper counts instead of striving for true breakthroughs and true impact. Web sites will link to each other in cycles so as to maximize their PageRank on Google instead of using links as truthful citations.

    Our society is overreliant on metrics, and sadly those who don't play the metrics game end up being left behind. Up to a point, playing it is healthy competition, but beyond that point, responsibility and morals should carry the weight.

  3. I've had the experience of an editor asking for more references to their own journal: I wasn't the first author, but I think the request was quietly ignored. I suspect that this behaviour is frowned upon by most academics.

    Hanna Kokko and Bill Sutherland wrote an article about impact factors a few years ago, including a look at how they differ across fields. I think there's still a lot more to be mined from their study, but I haven't thought too deeply about it. I will if it looks like I could get a grant application out of it.

    Bob

  4. Goodhart's law actually comes from macroeconomics. It was a generalisation of the observation that targeting the money supply to control the economy just gave people an incentive to change what they used as money, so monetarism as a practical strategy was not possible, whatever its theoretical virtues.

    Goodhart's law had a lot to do with the failures of macroeconomic policy in the late '70s and '80s; that is, many millions of people suffered because of it. It's not just an academic's plaything.
