The Irrelevance of “Probability”?

Seth forwarded me this article [link fixed, I hope] from Nassim Taleb:

I [Taleb] spent a long time believing in the centrality of probability in life and advocating that we should express everything in terms of degrees of credence, with unitary probabilities as a special case for total certainties, and null for total implausibility. Critical thinking, knowledge, beliefs, everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a standalone way in real-life decision-making. It has caused harm in many fields. . . .

We can easily see that when it comes to small odds, decision making no longer depends on the probability alone. It is the pair probability times payoff (or a series of payoffs), the expectation, that matters. . . .

What causes severe mistakes is that, outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5% probability of an earthquake of magnitude 3 or higher, a 2% probability of one of 4 or higher, etc. The same with wars: you have a risk of different levels of damage, each with a different probability. “What is the probability of war?” is a meaningless question for risk assessment. . . .

The point is mathematically simple but does not register easily. I’ve enjoyed giving math students the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is ~16%. What are the odds of exceeding it under a distribution with fatter tails (and the same mean and variance)? The right answer: lower, not higher — the number of deviations drops, but the few that take place matter more. It was entertaining to see that most of the graduate students got it wrong. . . .

Another complication is that just as probability and payoff are inseparable, so one cannot extract another complicated component, utility, from the decision-making equation. . . .

I’d just like to add two points. First, utility doesn’t exist either…
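
Here's a quick numerical check of Taleb's quiz, as a minimal sketch; the fatter-tailed distribution (a Student's t with 5 degrees of freedom rescaled to unit variance) is my choice of example, not his:

```python
from math import sqrt

from scipy import stats

# A t5 variable has variance 5/3, so multiplying by sqrt(3/5) matches
# the unit Gaussian's mean (0) and variance (1), as the quiz requires.
scale = sqrt(3 / 5)

print(stats.norm.sf(1))             # ~0.159: P(Z > 1 sd) for the Gaussian
print(stats.t.sf(1 / scale, df=5))  # ~0.127: lower, as Taleb says
print(stats.norm.sf(4))             # ~3.2e-5: P(Z > 4 sd) for the Gaussian
print(stats.t.sf(4 / scale, df=5))  # ~0.002: the rare exceedances are ~50x likelier
```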

18 thoughts on “The Irrelevance of ‘Probability’?”

  1. The classic case is the current financial risk problem. It is easy to estimate the probability of loan default for last year, but that is fairly useless: it has already happened, and as a guide to the future it fails because the economic fundamentals tend to change in fairly unpredictable ways. The financial world's method of dealing with this has been the trading of debt, but if the risk can't be evaluated then a price can't be determined in any rational way. The solution is to reduce risk substantially by making loans only to well-qualified applicants.

  2. Wrong link to The Edge's Taleb article; use this one.

    My comment on this is that hairsplitting about any kind of metric (probabilistic or otherwise) tends to blur the reasons and basis for having built the metric in the first place.

  3. I've read a book and a half by Taleb. My rule of thumb is that when someone has to tell me that he is a genius, odds are extremely good that he is not.

  4. Isn't he just saying that, instead of a univariate probability, we need to account for more factors in our decision-making models (and in our brains)? He seems baffled by the need to account for more than one factor in decision making. Huh? Did I get this wrong?

  5. I knew before that the Gaussian distribution was the maximum entropy distribution for a fixed mean and variance, but only in a book-learning, declarative memory kind of way. Taleb's spot quiz helped me see the reason why it's true.
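
    For reference, here is the standard variational sketch behind that fact (my summary, added for illustration):

    ```latex
    \max_{p}\ -\!\int p(x)\log p(x)\,dx
    \quad \text{s.t.} \quad
    \int p\,dx = 1, \qquad
    \int x\,p\,dx = \mu, \qquad
    \int x^{2}p\,dx = \mu^{2}+\sigma^{2}
    ```

    Stationarity of the Lagrangian forces log p(x) to be quadratic in x, so p is Gaussian, and any other density with the same mean and variance has entropy strictly below (1/2) log(2 pi e sigma^2).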

  6. This strikes me as the same thing you would struggle with in any mathematical modeling argument — for all but the simplest applications, models are wrong and can be made better by adding something else.

    But they are useful, and we make progress with them.

  7. For a Bayesian, the probability of an event is a random variable. There might be an expectation, but the real value of the probability has a distribution. So?
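
    A minimal illustration of the point, with made-up data (7 heads in 10 flips of a coin with unknown p, under a uniform prior):

    ```python
    from scipy import stats

    heads, flips = 7, 10

    # With a uniform Beta(1, 1) prior, the posterior for p is Beta(8, 4).
    posterior = stats.beta(1 + heads, 1 + flips - heads)

    print(posterior.mean())          # ~0.667: the expectation of the probability
    print(posterior.interval(0.95))  # ~(0.39, 0.89): p itself has a distribution
    ```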

  8. Those quoted statements seem so misguided it's hard to know where to even start.

    The one about the fatter-tailed distribution is especially misleading. Take, for example, the unit Gaussian vs. the t5 distribution (Student's t with 5 degrees of freedom). The t5 has a higher variance (5/3), so rescaling it to unit variance of course crunches everything down toward 0.

    He seems to be railing against the use of probability distributions with zero variance. In other words, the fair coin with p = 1/2 exactly, or the model of an earthquake as a single event of a certain magnitude with a certain probability per year of occurrence…

    But probabilistic thinking has been primarily about continuous distributions of events for hundreds of years…

    As if the expected value of damage weren't a probabilistic concept???
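
    To make that concrete, a minimal sketch of an expected-damage calculation from exceedance probabilities like the earthquake figures in the quoted passage (the damage amounts and the 6+ band are invented for illustration):

    ```python
    # Exceedance probabilities overlap ("4 or higher" is included in
    # "3 or higher"), so convert them to disjoint bands before averaging.
    bands = [
        (0.05 - 0.02,  1_000_000),    # 3 <= M < 4: assumed $1M damage
        (0.02 - 0.001, 20_000_000),   # 4 <= M < 6: assumed $20M damage
        (0.001,        500_000_000),  # M >= 6: assumed $500M damage
    ]

    expected_damage = sum(p * loss for p, loss in bands)
    print(f"expected annual damage: ${expected_damage:,.0f}")  # $910,000
    ```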

  9. Most of the commentators seem to have missed the point. The problem with decision theory is its reliance on distributions, especially the tails. Trying to estimate these from data is pretty much impossible: unless you've seen the 1-in-100-year event, it is a mystery how bad it is. The financial-stats people seem to have decided that everything follows a nice distribution, despite never having seen the tails.
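
    A small simulation makes the point (the heavy-tailed t3 "returns" and the roughly-ten-year sample size are assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Twenty equally plausible ten-year datasets (2500 daily draws each);
    # how stable is the estimated 1-in-1000 quantile?
    estimates = [
        np.quantile(rng.standard_t(df=3, size=2500), 0.999)
        for _ in range(20)
    ]
    print(f"0.999-quantile estimates range from {min(estimates):.1f} to {max(estimates):.1f}")
    ```

    The true 0.999 quantile of a t3 is about 10.2; the empirical estimates scatter widely around it, because each one hangs on the two or three largest points in its sample.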

  10. How to model and quantify events – how to construct plausible models and informative priors – is the real problem, not the use of probability. I was going to say the problem was "misuse" of probability – but really it's the use of naive models and overly uninformative priors (compared to our accumulated 'wisdom').

    If you fit a model which couldn't have predicted a rare event that you think is not really that improbable, then your model and priors are inappropriate.

  11. A comment on Taleb's 'quiz'. His question was more about narrow shoulders than about fat tails: if we look at more extreme tail probabilities (beyond a few sds from the mean), the "fatter" tails will of course have greater probability than the unit Gaussian's. So maybe the grad students' intuition about fat tails wasn't so bad after all…
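
    Locating that crossover numerically, using a variance-matched t5 against the unit Gaussian (my choice of pairing, the same one as in comment 8):

    ```python
    from math import sqrt

    from scipy import optimize, stats

    scale = sqrt(3 / 5)  # rescales a t5 to unit variance

    def gap(x):
        # Upper-tail probability of the unit Gaussian minus that of the
        # variance-matched t5 at the same number of sds.
        return stats.norm.sf(x) - stats.t.sf(x / scale, df=5)

    crossover = optimize.brentq(gap, 1.0, 3.0)
    print(f"tails cross near {crossover:.2f} sd")  # roughly 1.9 sd
    ```

    Inside that point the fat-tailed distribution exceeds less often (hence Taleb's quiz answer at 1 sd); beyond it, more often (hence the grad students' intuition).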

  12. My job requires me to cope with NIMBYs who think that since there's uncertainty, we can't make decisions or plan. I try to explain that just because we're ignorant of the future, we don't have to be stupid.

    Apparently I'm wrong.

    I used to try to help developers & home owners by using my subjective probabilities of success or failure to inform their decisions. After reading Taleb's quote, I take it I should just throw up my hands and say "who knows?! Check your horoscope, it'll be just as informative, I'm sure."

  13. Probability is trying to describe an equilibrium, but the true physical world is not in equilibrium. At least for large-scale events it is far from equilibrium; the universe, for example, is expanding. There is no way to accurately predict, nor to make a wise decision that guarantees humanity's survival. This is a failure not only of probability theory but of any science.

Comments are closed.