Weakly informative priors and imprecise probabilities

Giorgio Corani writes:

Your work on weakly informative priors is close to some research I [Corani] did (together with Prof. Zaffalon) in recent years using so-called imprecise probabilities. The idea is to work with a set of priors (possibly containing very different priors), to update each of them via Bayes’ rule, and then to compute the resulting set of posteriors.

The set of priors is convex and the priors are Dirichlet (thus conjugate to the likelihood); this makes it possible to compute the set of posteriors exactly and efficiently.
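To make the conjugate update concrete, here is a minimal sketch assuming the setup of Walley’s imprecise Dirichlet model, a common choice in this literature: the prior set is {Dirichlet(s·t) : t in the probability simplex} for a fixed concentration s, and conjugacy means the posterior set is just {Dirichlet(counts + s·t)}. The function name `posterior_extremes` and the default s=2 are illustrative choices, not anything from the papers.

```python
import numpy as np

def posterior_extremes(counts, s=2.0):
    """Bounds on posterior means under a credal set of Dirichlet priors.

    With prior set {Dirichlet(s * t)} and observed multinomial counts,
    the posterior mean of category j is (counts[j] + s * t[j]) / (N + s).
    Letting t[j] -> 0 or t[j] -> 1 gives the exact lower/upper envelope,
    so the whole posterior set is summarized by two closed-form vectors.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    lower = counts / (n + s)        # all prior mass placed on other categories
    upper = (counts + s) / (n + s)  # all prior mass placed on category j
    return lower, upper

# Example: three categories observed with counts 6, 3, 1.
low, high = posterior_extremes([6, 3, 1], s=2.0)
for j, (lo, hi) in enumerate(zip(low, high)):
    print(f"theta_{j}: [{lo:.3f}, {hi:.3f}]")
```

With few observations the intervals are wide (the priors disagree a lot); as N grows, lower and upper bounds converge, which is the sense in which the prior set encodes weak prior information.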

I [Corani] have used this approach for classification, extending naive Bayes and TAN (tree-augmented naive Bayes) to imprecise probabilities. Classifiers based on imprecise probabilities return more than one class when they find that the most probable class is prior-dependent, i.e., when picking different priors from the convex set identifies different classes as the most probable one. Instead of returning a single (unreliable) prior-dependent class, credal classifiers in this case preserve reliability by issuing a set-valued classification. In experiments, we have consistently found that Bayesian classifiers are unreliable on exactly the instances that our classifiers classify indeterminately (but reliably).
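A rough sketch of the set-valued decision rule, using interval dominance as the criterion for illustration (credal classifiers in the literature often use the stronger maximality criterion instead): given lower and upper bounds on each class’s posterior probability over the credal set, discard any class whose upper bound falls below some other class’s lower bound, and return whatever survives. The function name `credal_predict` and the example numbers are hypothetical.

```python
import numpy as np

def credal_predict(lower, upper):
    """Set of non-dominated classes under interval dominance.

    lower[c] and upper[c] bound the posterior probability of class c
    over the credal set.  Class a dominates class b if lower[a] > upper[b].
    A single surviving class is a determinate prediction; several
    surviving classes flag the instance as prior-dependent.
    """
    lower, upper = np.asarray(lower), np.asarray(upper)
    k = len(lower)
    return [b for b in range(k)
            if not any(lower[a] > upper[b] for a in range(k) if a != b)]

# Hypothetical interval probabilities for three classes:
print(credal_predict([0.50, 0.20, 0.05], [0.70, 0.35, 0.15]))  # -> [0]
print(credal_predict([0.30, 0.28, 0.05], [0.55, 0.52, 0.15]))  # -> [0, 1]
```

In the second call the intervals of classes 0 and 1 overlap, so the most probable class depends on which prior in the set you pick, and the classifier returns both rather than committing to one.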

This looks potentially interesting. It’s not an approach I’ve ever thought about much (nor do I really have the time to think about it now, unfortunately), but I thought I’d post the link to some papers for those of you who might be interested. Imprecise priors have been proposed as a method for encoding weak prior information, so perhaps there is something important there.