Using base rate information?

Aleks points to this blog entry from “HedgeFundGuy” on bias in decision making. HedgeFundGuy passes on a report finding that people’s opinions are strongly biased by their political leanings, then gives his take on the findings: he thinks this so-called bias isn’t really a problem; it’s just evidence of reasonable Bayesian thinking.

I’ll first copy out what HedgeFundGuy had to say (including his own copy of the report of the study), then give my take, which is slightly different from his.

HedgeFundGuy writes:

A recent Miami Herald article on some academic research on bias is quite illuminating. Because it’s a registration required link (ugh) I’ll snip the best parts.

Drew Westen is a professor of psychology at Emory University, and author of a new and still-unpublished study testing whether people make decisions based on bias or fact. Bias won hands down.

In a key scenario, respondents were led to believe a soldier was accused of torturing people at Abu Ghraib prison in Iraq. The fictional soldier claimed to have been following orders from superiors who told him the Geneva Convention had been suspended. He supposedly wanted to subpoena President Bush and Defense Secretary Donald Rumsfeld to prove his case. Respondents were asked if he should have that right.

Some were presented with strong “evidence” corroborating the soldier’s story. Others had only his word to go on.

But the strength or weakness of the evidence turned out to be immaterial. Researchers were able to predict people’s opinion over 80 percent of the time based simply on their opinions of the Bush administration, the GOP, the military and human rights groups. Those who had less affection for the president sided with the soldier even when the evidence was weak. And fans of the president tended to side with him even when the evidence was overwhelming.

We believe what we want, facts be damned.

“The scary thing,” says Westen, “is the extent to which you can imagine this influencing jury decisions, boardroom decisions, political decisions . . .”

It sounds like solid research and I believe it. But I’m not so pessimistic on the conclusion. Instead of bias, I would just say we are all Bayesians. We see things through a filter, but that allows us to process information faster and more efficiently. Sure, sometimes our preconceptions are mistaken and unhelpful, but generally we apply preconceptions every day to social and logistical problems big and small.

If you told me that someone I generally find unreliable or mistaken in his worldview (for me, Michael Moore or Ralph Nader) believed X, you would have to add a lot of data clearly pointing to X in order for me to also believe X. In contrast, if you told me that Milton Friedman or Richard Posner believed Y, I could probably withstand seeing some data suggesting not-Y and still believe Y, based on my faith in Friedman or Posner. In a more pedestrian fashion, when my wife says my shirt doesn’t match, I believe her without checking myself, but if she told me my spark plugs needed changing, I would pretty much ignore her. People and groups have credibility on different issues, and their alignment with certain positions causes me to have greater or lesser belief in those positions irrespective of the data. Those starting points then require more or less corroborating data depending on my initial skepticism.

I remember Fama and French’s influential 1992 Journal of Finance article showing little evidence for the CAPM. It was so persuasive because the authors were and are efficient-markets advocates, and the CAPM is aligned with the efficient-markets camp. Their conclusion against the CAPM suggested the data must have been very weak indeed. If that article had been written by an unknown, or by some ‘animal spirits’ advocate at Harvard, it would not have been nearly as persuasive. That’s bias, but that’s also rational.

My take on it:

I sympathize with HedgeFundGuy’s desire to debunk, or demystify [actually, I first typed this as “demistify” but that makes sense too!], the study. (As Aleks points out, this is the direction of Gigerenzer’s work on interpreting common mental “mistakes” as cognitively efficient behavior.)

However, I’m a little skeptical of HedgeFundGuy’s skepticism. First off, I disagree with HedgeFundGuy’s claim that “we are all Bayesians.” As far as I am aware of the research on judgment and decision making by Kahneman, Tversky, Krantz, and others, we are not Bayesians–at least, our preferences and decisions do not follow Bayesian rules. (Technically, in many settings people do not use base-rate information, and in just about all settings, people fail to account for sample size in a way consistent with likelihood-based/Bayesian inference.)

Now, maybe it’s OK that we’re not Bayesian–I suspect Gigerenzer isn’t bothered by it–but, given that so many experiments show that people don’t make use of base-rate information when they definitely should, I’m wary of suddenly turning around and applauding an experiment that shows that, in some settings, people use base-rate information too much.
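To be concrete about what using base-rate information looks like in the Bayesian sense, here is a minimal sketch using the standard disease-testing numbers (the numbers are illustrative and not taken from any study mentioned above): the normative answer combines the prior (the base rate) with the likelihood (the test result), whereas the typical experimental subject reports something close to the likelihood alone.

```python
# Minimal sketch of Bayes' rule with a base rate (illustrative numbers).
base_rate = 0.01        # P(disease): 1% of the population has the condition
sensitivity = 0.90      # P(positive test | disease)
false_positive = 0.09   # P(positive test | no disease)

# P(disease | positive) = P(pos|disease) P(disease) /
#   [P(pos|disease) P(disease) + P(pos|no disease) P(no disease)]
numerator = sensitivity * base_rate
denominator = numerator + false_positive * (1 - base_rate)
posterior = numerator / denominator
print(f"P(disease | positive test) = {posterior:.2f}")  # about 0.09

# Base-rate neglect: ignoring P(disease) and answering something near the
# sensitivity (0.90), which is roughly ten times too high.
```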

There’s base-rate information, and there’s data information (though that division is artificial, since, as HedgeFundGuy points out, we interpret the data information–the “likelihood”–in light of our beliefs about its source), and people weight the two in different ways in different circumstances. There’s clearly something “psychological” going on, and some of it can perhaps be interpreted as efficient use of scarce mental resources. But interpreting this as Bayesian inference–no way.
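As a small illustration of that weighting (my own toy numbers, nothing from HedgeFundGuy or the study): under coherent Bayesian updating, the amount of corroborating data you need before accepting a claim depends directly on how skeptical your prior is. The sketch below assumes a simple two-hypothesis setup in which each new observation favors the claim by a fixed likelihood ratio.

```python
def posterior_after_n(prior, n, likelihood_ratio=2.0):
    """Posterior probability of a claim after n independent observations,
    each favoring the claim by the given likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** n
    return posterior_odds / (1 + posterior_odds)

for prior in (0.5, 0.1, 0.01):  # trusting, skeptical, very skeptical
    n = 0
    while posterior_after_n(prior, n) < 0.9:
        n += 1
    print(f"prior {prior}: {n} observations to reach a posterior of 0.9")
# prints 4, 7, and 10 observations, respectively
```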

To the extent that we all have political opinions, and we would like those who disagree with us to learn a bit from unpleasant facts, I think findings such as those reported in this paper are indeed disturbing.

4 thoughts on “Using base rate information?”

  1. The fact that Kahneman, Tversky, and Krantz are referred to causes me to pay additional respect to what is written. Those folks made an impressive contribution. Such awareness by the writer implies (but does not guarantee) that his understanding is above average.

    Filtering is a cornerstone in cognition.

    Screwdrivers, as tools, get misapplied and abused in innumerable ways precisely because they are justifiably recognized as quite useful in the realm of their APPROPRIATE use. The same goes for lots of tools.

  2. Stated probabilities are numbers read or heard. Experienced probabilities are taken in trial by trial through the senses. Not surprisingly, the two types of equivalent information are psychologically distinct. Visually experiencing chance outcomes enables automatic learning of probabilities (Christensen-Szalanski and Beach 1982; Hasher and Zacks 1984) and helps people achieve an intuitive grasp of the choices before them. Experiencing frequencies has also been shown to have a profound effect on normative response rates in probability problems. Koehler (1996) provides a literature review citing evidence that trial-by-trial learning of base rate information reduces base rate neglect. As Andrew mentions, Gigerenzer, Hell and Blank (1988), for instance, allowed participants to experience random sampling directly in Kahneman & Tversky’s (1973) classic demonstration of base rate neglect, and saw a large increase in normatively correct Bayesian responses. (For other examples, see Christensen-Szalanski and Bushyhead 1981; Manis et al. 1980.)

  3. Dan,

    You write, "Stated probabilities are numbers read or heard. Experienced probabilities are taken in trial by trial through the senses." What about probabilities that are estimated using systematic data collection and statistical modeling? (For example, see chapter 1 of Bayesian Data Analysis, or other examples such as my estimates of the probability that an election is tied.) Is this a third category: probabilities that are experienced and then stated?

  4. Would you like to explain to me what this paragraph means?

    “The scary thing,” says Westen, “is the extent to which you can imagine this influencing jury decisions, boardroom decisions, political decisions . . .”

    Thanks
