If statistics is so significantly great, why don’t statisticians use statistics?

I’ve recently decided that statistics lies at the intersection of measurement, variation, and comparison. (I need to use some cool Venn-diagram-drawing software to show this.) I’ll argue this one another time; my claim is that, to be “statistics,” you need all three of these elements: no two will suffice.

My point here, though, is that as statisticians, we teach all three of these things and talk about how important they are (and often criticize or mock others for selection bias and other problems that arise from not recognizing the difficulties of good measurement, attention to variation, and focused comparisons), but in our own lives (in deciding how to teach and do research, administration, and service, not to mention our personal lives), we hardly think about these issues at all. In our classes, we almost never use standardized tests, let alone the sort of before-after measurements we recommend to others. We do not evaluate our plans systematically, nor do we typically even record what we’re doing. We draw all sorts of conclusions based on sample sizes of 1 or 2. And so forth.

We say it, and we believe it, but we don’t live it. So maybe we don’t believe it. So maybe we shouldn’t say it? Now I’m working from a sample size of 0. Go figure.

6 thoughts on “If statistics is so significantly great, why don’t statisticians use statistics?”

  1. Your argument reminded me of a keynote talk by James Steiger at the Psychometric Society about how we (our brains) make decisions using Bayesian models, and that this is why we can decide a restaurant is bad with only 1 observation, instead of continuing to sample until we have enough power to reject the null.
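
    A minimal sketch of that idea, assuming a simple beta-binomial model (my own illustration, not anything from Steiger's talk): one bad meal moves a weak prior noticeably, so a decision can be reasonable without any significance test.

    ```python
    # Hypothetical diner judging a restaurant: p = probability a meal is good.
    # Weak prior Beta(2, 2); one observed bad meal updates it to Beta(2, 3).
    from scipy import stats

    prior = stats.beta(2, 2)
    posterior = stats.beta(2, 3)  # Beta(a, b + 1) after a single failure

    print(f"P(p < 0.5) before any meal: {prior.cdf(0.5):.2f}")         # 0.50
    print(f"P(p < 0.5) after one bad meal: {posterior.cdf(0.5):.2f}")  # ~0.69
    ```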

  2. Perhaps, as with not representing oneself in court (he who represents himself has a fool for a client), what to be unsure about is poorly self-assessed and requires a detached "consultant".

    And that consultant's "prayer" should maybe be: "Enable others to realize what they should be unsure about, and then help them be less unsure about just how unsure they really should be about that"

    with this footnote:

    Statisticians should try to help others avoid being seriously misled by observations, primarily by assisting them to get better assessments of confounding, uncertainty, and variability in the observations. One of the best ways to do this is to design ways to obtain potentially less misleading observations, such as by making comparisons between randomized groups.

    The next best would be to show just how closely (neither overstating nor understating just how closely) other kinds of observations can be used to suggest what the observations would have been like with randomization.

    Observations always mislead: anything learnt from them, such as an estimated effect, is always wrong.

    Statisticians evade this certainty of error by restricting what is (claimed to be) learnt to an interval rather than a point estimate: the effect is not a threefold increase but instead somewhere between a twofold and a fourfold increase. By relaxing the claim, the certainty or unavoidability of an error can (with randomization!) be replaced with an ascertained maximum frequency of errors: in 19 out of 20 studies, the interval “learned” will not leave out the "conceptually randomly drawn" true effect.
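
    A quick simulation can make that "19 out of 20" property concrete. This is a minimal sketch under assumed settings (a normal-outcome randomized comparison with a known true effect; none of the specifics come from the comment itself):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, n, n_studies = 1.0, 50, 10_000
    covered = 0

    for _ in range(n_studies):
        treated = rng.normal(true_effect, 1.0, n)  # randomized treatment group
        control = rng.normal(0.0, 1.0, n)          # randomized control group
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
        lo, hi = diff - 1.96 * se, diff + 1.96 * se  # nominal 95% interval
        covered += lo <= true_effect <= hi

    print(f"Coverage: {covered / n_studies:.3f}")  # close to 0.95, i.e. 19 in 20
    ```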

    K?

  3. This reminds me of a debate in the (psychological) decision-making literature between Gigerenzer and Kahneman & Tversky. Gigerenzer's argument, as I understand it (this isn't my central area, so I'm probably not doing it justice), is that K&T have described "biases" relative to normative models that assume unlimited time and capacity to make decisions. In the real world, the same heuristics that lead to apparent biases in artificial lab settings actually serve us better than trying to be completely rational. Part of the reason is that heuristics are "fast and frugal," and in the real world part of what constitutes a "good" decision is being able to make it quickly and without interfering with other important tasks.
    So the extension would be that if you tried to use optimal, statistically correct, evidence-based techniques in your teaching or other areas of your life, you'd spend all your time collecting data and fitting models to figure out how to teach well, and you wouldn't have much time left over to do all the labor-intensive things that a good teacher does.
    I'm not sure that applies to everything you've listed, though. Some of the cost-benefit ratio shifts when you scale up. It may not be worth it to try to craft a standardized test for an advanced seminar that only you will ever teach, and then only occasionally, to a handful of students; but if everybody in a field got together to create a standardized test for undergraduate Intro to X, it might be worth it…

  4. Whereas I routinely criticize statisticians for being bad at praise. They can say what's wrong with something but not what's right. I would say that in their talk statisticians do badly (critiques that omit what's good are unbalanced), but in their actions they are forced to be realistic and to balance costs and benefits appropriately.

  5. Sanjay: nice point, survival of the fit, not the fittest or best.

    Reminds me of the advice: in anything but your own research, try to be an early adopter rather than a pioneer.

    K?

  6. Maybe, Seth, but statistical careers lie at the intersection of being a mathematician, a research facilitator, a research-claims arbitrator, (increasingly) a computer programmer, a technical writer, perhaps an empirical-research educator, etc.

    This is an uneasy mix, and my sense is that for many, so much weight needs to be put on the first that costs and benefits are seldom well balanced.

    My guess is that you and I interact more often with a subset of statisticians who are more balanced in this sense.

    Thanks for this point ("critiques that omit what's good are unbalanced"); perhaps a bit related is Andrew Carnegie's quip: "I don't pay you lawyers to tell me what I can't do; I pay you to tell me how to get done what I want done!"

    K!
