Nope

For some reason Aleks pointed me to this description of Bayes’ theorem. Just in case anyone stumbles across this, let me say for the record that this description has very little to do with applied Bayesian statistics as I know it. For example, “The probability of a hypothesis H conditional on a given body of data E . . .” I just about never estimate the probability of a hypothesis. Rather, I set up a model and perform Bayesian inference within the model.
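
For the record, here is the theorem itself, first in the discrete-hypothesis form the entry describes and then in the continuous-parameter form that arises in applied work (the notation, theta for parameters and y for data, is my shorthand, not the entry's):

$$
p(H \mid E) = \frac{p(E \mid H)\,p(H)}{p(E)},
\qquad
p(\theta \mid y) = \frac{p(y \mid \theta)\,p(\theta)}{\int p(y \mid \theta')\,p(\theta')\,d\theta'}.
$$

In the second form the posterior is a density over a continuum of parameter values, not a probability attached to any single hypothesis.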

To be fair, this is from an encyclopedia of philosophy, not statistics, but I could imagine people getting confused by this and thinking it has something to do with Bayesian inference in applied statistics. (See here for more on Bayesian data analysis: what it is and what it is not.)

4 thoughts on “Nope”

  1. But it was just a description of a theorem, after all

    Theorems tell us what must be true in our abstractions (i.e., models), not what is true about the world (all models are false).

    Perhaps similar to the confusion you raised in your slides between quantities of interest (about the world) and inferential summaries (about our assumed model).

    As I think someone once said, "Theorems are never wrong, they just may not apply."

    Or perhaps the point you are making here is that "just knowing them does not solve applied problems."

    Keith

  2. I am not sure what distinction you are trying to make. Insofar as parameter values in a particular model can be considered "hypotheses", it seems a fair statement, and there are a lot of people who do parameter estimation. Are you arguing that in applied Bayesian statistics it is more common to make predictive inferences about observables, rather than parameter estimation? That is, you're more interested in probability distributions of observables than of parameters (which, to me, are "hypotheses")?

  3. Keith,

    Yes, the article is fine as it stands. But I wouldn't want people to think that's what Bayesian data analysis is like. The article could use a couple more sentences, something like: "There is also something called applied Bayesian statistics or Bayesian data analysis which commonly applies Bayes' theorem to continuous-parameter models and performs inference conditional on a model rather than attempting to compute the posterior probability that a model is true. This methodology is a generalization of classical statistics that allows partial pooling of information from multiple sources."
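
    To make "partial pooling" concrete, here is a minimal sketch using the eight-schools example from Bayesian Data Analysis; for simplicity it fixes the between-group standard deviation tau at a point value rather than averaging over its posterior:

    ```python
    # Minimal sketch: partial pooling of group estimates in a normal
    # hierarchical model, conditioning on a fixed between-group sd tau
    # (a full analysis would average over the posterior of tau).
    import numpy as np

    # Eight-schools data (estimated effects and standard errors).
    y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
    se = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])

    tau = 10.0  # assumed between-group sd, fixed here for illustration
    mu = np.average(y, weights=1 / (se**2 + tau**2))  # pooled mean estimate

    # Partial pooling: each group's estimate is pulled toward the pooled
    # mean, more strongly when its standard error is large relative to tau.
    shrink = se**2 / (se**2 + tau**2)
    theta_hat = (1 - shrink) * y + shrink * mu
    print(np.round(theta_hat, 1))
    ```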

    NU:

    No, I'm interested in parameters as well as observables. Parameters are what generalize to new situations. What I was saying is that: (a) I almost always work with continuous-parameter models, and it doesn't make much sense to identify each of the continuously-infinite values of theta with a different "hypothesis", and (b) I do check my models, but by examining their implications and comparing to data and other knowledge, not by attempting to compute the posterior probability that my whole model is true.
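
    Here is a minimal sketch of what (a) and (b) look like in practice (the normal model, simulated data, and test statistic are my own illustration, not anything from the article):

    ```python
    # Minimal sketch: (a) inference for a continuous parameter, and
    # (b) a posterior predictive check against the data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: n draws from a normal with unknown mean, known sd = 1.
    y = rng.normal(loc=0.5, scale=1.0, size=20)
    n, sigma = len(y), 1.0

    # (a) Continuous-parameter inference: with a flat prior on theta, the
    # posterior is normal -- a density over a continuum of values, not a
    # probability attached to any single "hypothesis" theta = theta0.
    post_mean = y.mean()
    post_sd = sigma / np.sqrt(n)
    theta_draws = rng.normal(post_mean, post_sd, size=4000)

    # (b) Model checking via posterior predictive replications: simulate
    # new datasets from the fitted model and compare a test statistic to
    # the data, rather than computing the probability the model is true.
    y_rep = rng.normal(theta_draws[:, None], sigma, size=(4000, n))
    T_data = np.abs(y).max()           # test statistic on observed data
    T_rep = np.abs(y_rep).max(axis=1)  # same statistic on replications
    print("posterior predictive p-value:", (T_rep >= T_data).mean())
    ```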

  4. Andrew,

    Thanks for the clarification.

    I think this may just be a terminological argument about what a "hypothesis" is.

    I would not feel uncomfortable, for example, calling "m=0.511 MeV" or "m=0.512323 MeV" one of uncountably infinitely many hypotheses about the mass of an electron.

    Your point (b) is similar, I think: you identify "hypothesis" with "model". I agree that you rarely try to calculate the probability of a model in order to check it, but you do try to calculate the probability of parameters conditional on a model (which, again, I call hypotheses).

    It's not clear to me that they were endorsing the estimation of model probabilities for the purpose of model checking, or implying anything about model checking at all.

    I think they are just calling a hypothesis "the thing you're estimating the posterior distribution of", i.e., a hypothesis is something you're making probabilistic inferences about.
