More on Bayesian deduction/induction

Kevin Bryan wrote:

I read your new article on deduction/induction under Bayes. There are a couple interesting papers from economic decision theory which are related that you might find interesting.

Samuelson et al. have a (very) recent paper about what happens when you have some Bayesian and some non-Bayesian hypotheses. (I mentioned this one on my blog earlier this year.) Essentially, the Bayesian hypotheses are forced to “make predictions” in every future period (“if the unemployment rate is x%, the president is reelected with pr = x”), whereas other forms of reasoning need not commit to a prediction every period (say, analogies: “If the unemployment rate is above 10%, the president will not be reelected”). Imagine you have some prior over, say, the economy and elections, with 99.9% of the hypotheses being Bayesian and the rest being analogies as above. Then 100 years from now, because the analogies are so hard to refute, updating on the evidence will push the proportion of Bayesian hypotheses toward zero.
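The dynamic described above can be illustrated with a toy simulation (all numbers and functional forms here are hypothetical, invented for illustration, not taken from the paper): a few “Bayesian” hypotheses that commit to a sharp reelection probability every period compete against a single hard-to-refute analogy, and ordinary Bayesian updating drains posterior mass away from the sharp hypotheses.

```python
import random

random.seed(42)

# Prior: 99.9% of the mass on sharp "Bayesian" hypotheses, 0.1% on the analogy.
# (Hypothetical parameter values, chosen only to make the effect visible.)
bayes_probs = [0.2, 0.4, 0.6]  # each: a fixed claimed P(reelection)
weights = {("bayes", p): 0.999 / len(bayes_probs) for p in bayes_probs}
weights[("analogy", None)] = 0.001

def likelihood(hyp, unemployment, reelected):
    kind, p = hyp
    if kind == "bayes":
        # A sharp hypothesis pays a multiplicative penalty every period.
        return p if reelected else 1 - p
    # Analogy: "if unemployment > 10%, the president is not reelected";
    # it is silent (likelihood 1) whenever unemployment is at or below 10%.
    if unemployment > 0.10:
        return 0.0 if reelected else 1.0
    return 1.0

# Simulate 100 periods in which the analogy is never refuted:
# reelection only ever happens when unemployment is at or below 10%.
for _ in range(100):
    x = random.uniform(0.0, 0.2)
    reelected = x <= 0.10 and random.random() < 0.5
    for hyp in weights:
        weights[hyp] *= likelihood(hyp, x, reelected)
    total = sum(weights.values())
    weights = {h: w / total for h, w in weights.items()}

bayes_share = sum(w for (kind, _), w in weights.items() if kind == "bayes")
print(round(bayes_share, 6))  # a value near 0: the analogy dominates
```

Because the analogy assigns likelihood 1 to every observation that does not refute it, while each sharp hypothesis multiplies in a likelihood strictly below 1 every single period, even a 0.1% prior on the analogy swamps the Bayesian hypotheses after enough periods.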

There is a related line of research among economists (particularly in the program where I am a grad student, MEDS at Northwestern) called “expert testing”; the basic premise is that it is difficult for a principal to know who actually knows something and who is just making something up. Related results deal with the problem of Popper-style science when scientists are self-interested. That is, if scientists are not totally honest (e.g., they don’t report negative results), it turns out to be nonobvious (and in many cases outright false) that testing and rejecting hypotheses leads to increased knowledge about the “true state of the world”. You might find this line of research interesting.

I don’t know anything about the economics literature in this area, so I appreciate the pointers. Seeing the descriptions above, though, I worry that these writers have the traditional Bayesian attitude that I don’t like: the idea that the goal of Bayesian statistics is to find the posterior probability that a scientific hypothesis is true, or to compare the statistical evidence supporting competing hypotheses. These ideas sound appealing, but I don’t think they really work. As Cosma and I discuss in our article (and as we further discuss elsewhere, including chapter 6 of BDA), I think that Bayesian inference works well within a model but not so well for comparing models.

4 thoughts on “More on Bayesian deduction/induction”

  1. not specifically what you're after, but it would be a fine place to start if you wanted some background on the econ perspective of decision theory:

    gilboa's text entitled "theory of decision under uncertainty". he gives a lucid discussion of subjectivity and deduction/induction from a philosophy of science perspective (in addition to covering most of the canonical models).

Am I reading you correctly that the main problem, as you see it, for inductive accounts of Bayesianism as a road to (at least approximate) truth is that, no matter what, our models are going to be hopelessly underdetermined?

By "underdetermination" I'm here primarily thinking of what Bas van Fraassen (1980) termed "empirical equivalence"; that is, when two or more theories can be used to derive the same conclusions about some observable phenomena of interest. Because there is no way to (1) adjudicate between such empirically equivalent theories by means of empirics or (2) know with certainty that there is no such theory waiting around the corner (perhaps one that no one has yet thought of), asking for truth doesn't really make much sense, as the best we can ever hope for is empirical adequacy.

    (See the always helpful SEP for references and a writeup: http://plato.stanford.edu/entries/scientific-unde… )

    You seem to be following a similar line on a number of occasions, e.g.:

"If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered (p. 8)."

"At the risk of boring the reader by repetition, there is just no way we can ever have any hope of making [the model space] include all the probability distributions which might be correct, let alone getting p(·|y) if we did so, so this is deeply unhelpful advice. The main point where we disagree with many Bayesians is that we do not see Bayesian methods as generally useful for giving the posterior probability that a model is true, or the probability for preferring model A over model B, or whatever (p. 14)."

    "Expanding our prior distribution to embrace all the models which are actually compatible with our prior knowledge would result in a mess we simply could not work with, nor interpret if we could (p. 15)."

"Conversely, a problem with the inductive philosophy of Bayesian statistics in which science "learns" by updating the probabilities that various competing models are true is that it assumes that the true model (or, at least, the models among which we will choose or average over) is one of the possibilities being considered. This does not fit our own experiences of learning by finding that a model doesn't fit and needing to expand beyond the existing class of models to fix the problem (p. 23)."

    I'm mostly curious because it seems to me that much of what you deal with in the article isn't really first and foremost about whether Bayesianism is best thought of as a deductive or inductive enterprise, but rather whether Bayesians are warranted in being (epistemically optimistic) scientific realists or whether they should instead be (epistemically pessimistic) empiricists. Needless to say, the latter concern is something which extends far beyond Bayesianism.

The point, I think, is mostly of note because this is pretty well trodden ground by now, with the scientific realist/empiricist divide being one of the main developments of post-1970 philosophy of science (but, as you acknowledge in the paper, this is likely a debate that is completely unknown to the vast majority of social scientists, for better or worse). There are quite sophisticated objections (unrelated to the merits of Bayesianism as such; cf. the SEP entry on scientific realism) to, among other things, the underdetermination thesis which, if sound, would make arguments in the style of the above quotes carry quite a bit less weight than perhaps intended.

Basically, if you're arguing against actual practicing philosophers of science over matters such as truth, then I think you might have missed your target by about 30 years or so. If, on the other hand, the target is a certain self-understanding of practicing scientists, then it probably fares better (because, again as noted, there often seems to be about a 30-year time lag between the philosophy of philosophers and the philosophy of scientists, probably because most scientists learn the bulk of their philosophy in their younger years and then simply can't keep up even if they wanted to).

  3. Oops, think I responded to the wrong post. Was aiming for "Philosophy and the practice of Bayesian statistics."

  4. Mike:

    My impression is that the philosophers of science have a general misunderstanding about what Bayesian statistics is all about, a misunderstanding that is shared by many (as indicated in the passage from Wikipedia that is quoted in our article).

    I'm not trying to argue terminology here. If you want, you could refer to "Bayesian statistics as defined by Wikipedia" and "Bayesian statistics as discussed in my books" as Bayes1 and Bayes2, respectively. The point of our article is that:

    (a) Bayes1 doesn't really do what it says it's doing, and

(b) Bayes2 does live up to its claims, but it's not "inductive inference" in the way it is usually understood to be.

    I think much confusion has arisen because people don't realize the differences between Bayes1 and Bayes2, and I'm pretty sure that philosophers of science, to the extent they think about Bayesian inference, think about Bayes1.

    So I do think we have useful things to communicate with modern philosophers of science.

    That said, I'm sure you're right that we've missed a lot of important discussion and many important references in that literature, and we'll try to improve the literature review when the time comes to revise our article for the journal. Thanks much for the pointers.
