Quote of the day: statisticians and defaults

On statisticians and statistical software:

Statisticians are particularly sensitive to default settings, which makes sense considering that statistics is, in many ways, a science based on defaults. What is a “statistical method” if not a recommended default analysis, backed up by some combination of theory and experience?

9 thoughts on “Quote of the day: statisticians and defaults”

  1. Gelman, you've posted without comment or attribution? What do you think this is? The real world????!!!??? Interpret for me!!!

  2. There are certainly similarities, but there are differences too. You have to actually choose a method (even if you have a default for a particular problem type), but once you choose that method, you use the defaults by … err, default … unless you choose otherwise.

    If I enter a bunch of data into R, there's no default analysis. But if I start off with glm(), there are many defaults.
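    A minimal R sketch of that point, with made-up data: the bare call and the fully spelled-out call fit the same model, because glm() silently supplies family = gaussian(link = "identity") and the rest.

    ```r
    # Toy data, invented purely for illustration.
    set.seed(1)
    d <- data.frame(x = 1:10)
    d$y <- 2 * d$x + rnorm(10)

    fit_default  <- glm(y ~ x, data = d)  # the defaults fill in everything else
    fit_explicit <- glm(y ~ x, family = gaussian(link = "identity"), data = d)

    # Identical fits: the explicit call merely names what the defaults already chose.
    all.equal(coef(fit_default), coef(fit_explicit))
    ```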

  3. I think there is a good deal of truth in this. Consider the 'default' level of significance in classical tests, or Jeffreys' recommendations for Bayes factors.

    This reminds me of a thought I had a while ago about statistical computing. There is a concept in computer science called 'convention over configuration': where a program offers options, it should also supply sensible defaults, so that users only configure what deviates from the convention. This seems to have worked for programs like SAS and certain parts of R (lm, glm, etc.). Conversely, the lack of such defaults may be a barrier for new users of programs like BUGS and JAGS.

  4. I agree that there is some truth in this. On the other hand, it misses what I'm most interested in about statistics: a critical approach to your own model (including its defaults). Andrew's work is full of defaults (I kind of prefer Radford Neal's term 'quick hacks' …), but these are followed up by lots of checking and further exploration. Too often the defaults turn into rules, which turn into 'laws'.
