Of psychology research and investment tips

A few days after “Dramatic study shows participants are affected by psychological phenomena from the future” (see here), the British Psychological Society follows up with “Can psychology help combat pseudoscience?”

Somehow I’m reminded of that bit of financial advice which says, if you want to save some money, your best investment is to pay off your credit card bills.

11 thoughts on “Of psychology research and investment tips”

  1. I have to say, recording the occurrence of experiments is a great idea. Publishing all the data for successful and unsuccessful experiments would be nicer, but maybe this is low enough effort that people would actually do it. “We intend to investigate whether X,Y,Z correlate with W, in population size N.” Especially in cohort studies where you get false positives from the large number of experiments you’re running, you’d have somewhere to double check: I saw significant correlation in A and B, but B has been investigated a dozen times with no results reported, so I’ll focus on A.
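
    To make the multiple-comparisons worry concrete, here is a minimal simulation sketch. The numbers (100 candidate correlations, all actually null, 200 subjects each) are made up purely for illustration:

    ```python
    # Minimal sketch: many truly-null hypotheses tested at alpha = 0.05.
    # All numbers here are illustrative assumptions, not from any real cohort study.
    import numpy as np

    rng = np.random.default_rng(0)
    n_tests, n = 100, 200  # 100 candidate correlations, 200 subjects each

    false_positives = 0
    for _ in range(n_tests):
        x = rng.normal(size=n)
        y = rng.normal(size=n)  # truly unrelated to x
        r = np.corrcoef(x, y)[0, 1]
        # Under the null, r is approximately Normal(0, 1/sqrt(n)) for large n
        if abs(r) > 1.96 / np.sqrt(n):
            false_positives += 1

    print(false_positives)  # around 5 "significant" correlations, purely by chance
    ```

    A registry of what was attempted is exactly the double-check described above: the dozen unreported attempts on B would be visible, so a lone “hit” on B looks like what it probably is.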

  2. Unless many already know the results of their studies before they do them.

    Then registration will be biased!

    Again the Zombies category is just the place for such thoughts…

    K?

  3. Can’t help but wonder what this (amazingly clever & good) study should make us think about the debate over the Bem article:

    Koehler, J. J., The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality, Organizational Behavior and Human Decision Processes 56, 28-55 (1993), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1469652.

    Using scientists with relevant expertise as subjects, the investigator found that a (mock) study of ESP was much more likely to be assessed as methodologically sound when the *results* reported in the study did not support the existence of ESP than when they did find support (that, of course, was the experimental manipulation). Maybe we should run the study in reverse (not sure how) to validate?

    Interesting discussion, too, about whether the revealed influence of priors on assessments of the quality of the study is consistent with Bayesianism (sure: Bayesianism tells you how to update based on the likelihood ratio of new information, but doesn’t have anything to say about where the likelihood ratio assigned to that information comes from) & whether using priors to assess the quality of new information is a normatively sensible strategy for decision under uncertainty (the author says yes, even though this is the very essence of “confirmation bias”; I am skeptical, & I decided based on my priors that his formal proofs had all sorts of errors).
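
    To make the updating step concrete, here is a minimal sketch of the odds form of Bayes’ rule; the likelihood ratios below are made-up numbers, chosen only to show that the posterior hinges entirely on the likelihood ratio you decide the study deserves:

    ```python
    # Odds form of Bayes' rule: posterior odds = likelihood ratio * prior odds.
    # The numbers are illustrative assumptions, not taken from Koehler's paper.

    def update(prior_prob, likelihood_ratio):
        """Posterior probability after seeing evidence with the given likelihood ratio."""
        prior_odds = prior_prob / (1 - prior_prob)
        posterior_odds = likelihood_ratio * prior_odds
        return posterior_odds / (1 + posterior_odds)

    prior = 1e-4  # a skeptic's prior probability that ESP is real (assumed)

    # Bayes tells you how to turn a likelihood ratio into a posterior...
    print(update(prior, likelihood_ratio=3))    # ~0.0003: barely moved
    print(update(prior, likelihood_ratio=300))  # ~0.03: moved a lot

    # ...but it is silent on which likelihood ratio the study deserves; that judgment
    # (is the design sound? were the analyses pre-specified?) is where priors sneak back in.
    ```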

    Oh, one more thing: the investigator also performed the experiment on social scientists engaged in the study of “parapsychology,” the gadflies who engage in regular scientific testing of such phenomena and who tend to accept that there is some empirical support for them. Their assessments of the methods of the mock study did *not* vary depending on the results…

  4. Dk:

    Evaluation of the statistical methods (as in Kaiser’s comment above) is important, but it’s not the whole story. All the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero (who am I to say?), just that the comments about the high-quality statistics in the article don’t say much to me.

    I think it’s naive when people implicitly assume that either the study’s claims are correct or its statistical methods must be weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects, and so ESP researchers use high-quality statistics.

    To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. The difference is that when you’re studying a large, robust phenomenon, little statistical errors won’t be so damaging as in a study of a fragile, possibly zero effect.
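
    As a quick back-of-the-envelope illustration (with made-up numbers), the same small methodological error is a rounding issue for a robust effect and the whole story for a tiny one:

    ```python
    # Made-up numbers: one fixed systematic error compared against two effect sizes.
    artifact = 0.05      # bias from some hypothetical methodological slip, in sd units

    large_effect = 0.80  # a robust phenomenon, in sd units
    tiny_effect = 0.03   # an ESP-sized effect, in sd units

    print(artifact / large_effect)  # ~0.06: the error is a small fraction of the signal
    print(artifact / tiny_effect)   # ~1.7: the error is larger than the signal itself
    ```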

    In some ways, there’s an analogy to the difficulties of using surveys to estimate small proportions, in which case misclassification errors can loom large, as discussed here.
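
    A minimal numerical sketch of that survey problem, with made-up sensitivity and specificity, shows how quickly misclassification swamps a small proportion:

    ```python
    # How misclassification distorts a small true proportion in a survey.
    # Error rates below are illustrative assumptions, not from any particular survey.

    true_prevalence = 0.01  # 1% of respondents actually have the trait
    sensitivity = 0.95      # P(classified positive | truly positive)
    specificity = 0.98      # P(classified negative | truly negative)

    apparent = true_prevalence * sensitivity + (1 - true_prevalence) * (1 - specificity)
    print(apparent)  # ~0.029: nearly triple the true 1%

    # With a large proportion (say 40%), the same error rates barely matter:
    apparent_large = 0.40 * sensitivity + 0.60 * (1 - specificity)
    print(apparent_large)  # ~0.392 vs. a true 0.40
    ```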

    And, yes, this can all be seen as Bayesian, but the general principles would still be there, in some form, had Bayes and Laplace never been born.

  5. Andrew:
    The subjects in Koehler’s study, as I recall, were asked to assess the internal validity of the design, not (or not merely) the statistical methods. I understand Wagenmakers to be criticizing the Bem study in that way too: that is, he is saying that Bem didn’t test hypotheses about individual differences specified in advance but rather poked around in the data ex post to find significant results, which he then reported (if so, it’s really shocking JPSP would publish; social psychologists certainly *don’t* think that is an appropriate way to conduct empirical testing). But in any case, the methods, statistical or otherwise, were the *same* (hence equally valid or invalid!) in the two conditions of Koehler’s study, so there shouldn’t have been an effect conditional on the manipulation unless there was confirmation bias.

    Bill:
    Try cutting and pasting the link or just typing it in (or just google it; it’s definitely on SSRN).

    Everyone: Since we are on the topic of priors influencing scientists’ assessments of the quality of methods, there’s also a great experimental study that finds scientist reviewers were *less* likely to find methodological problems if a study reported making a surprising but gratifying discovery: Wilson et al., Evaluations of Research, Psychol Sci 4, 322-325 (1993). But that seems contrary to how many people experience the peer review system, no? So I guess Koehler’s study predicts many will find Wilson et al.’s study horribly flawed.

  6. Dk:

    I agree that this perception research is interesting and important.

    What I'm pushing against here is the idea that, if nothing's going on, there must be some serious methodological flaw with the study. When you're studying tiny or zero effects, all the methodology in the world won't save you.
