Dennis Lindley’s and Christian Robert’s reviews of The Black Swan, also an aside about the most important philosophical point of confusion about Bayesian inference

Here’s Lindley. I suspect I’d agree with Lindley on just about any issue of statistical theory and practice. I’ve read some of Lindley’s old articles and contributions to discussions and, even when he seemed like something of an extremist at the time, in retrospect he always seems to be correct. That said, I disagree with him on Taleb. I think the difference is that Lindley was evaluating The Black Swan based on its statistical content, whereas I liked the book because it was full of ideas and stories that sparked thoughts in my mind (and, I think, in the minds of many readers).

Also, I disagree with Lindley 100% about Karl Popper. Even though, again, I think Lindley and I are extremely close on issues of statistical practice and theory.

And here’s Robert. I like his connection of “black swans” to “model shift.” This fits in well with my three stages of Bayesian Data Analysis (model building, model fitting, model checking), with model checking being the all-important but often neglected ugly sister. (As I’ve discussed many times, you rarely see graphical model checks in a published paper, because either (a) the model didn’t fit, in which case at worst you’d be too embarrassed to admit it and at best you’d fix the model and there’d be nothing to report, or (b) the model fits ok, in which case the model check is probably only worth a sentence or two.)

From a philosophical point of view, I think the most important point of confusion about Bayesian inference is the idea that it’s about computing the probability that a model is true. In all the areas I’ve ever worked on, the model is never true. But what you can do is find out that certain important aspects of the data are highly unlikely to be captured by the fitted model, which can facilitate a “model shift” moment. This sort of falsification is why I believe Popper’s philosophy of science to be a good fit to Bayesian data analysis.
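To make this concrete, here is a minimal sketch of such a check (all data and numbers are hypothetical, and plug-in estimates stand in for a full posterior, for brevity): fit a normal model, simulate replicated datasets from the fit, and ask whether a test statistic of the observed data, here the sample maximum, is plausible under the replications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: mostly well-behaved, plus one extreme "black swan" point.
y = np.concatenate([rng.normal(0, 1, 99), [8.0]])

# Fit a normal model (plug-in estimates stand in for a full posterior here).
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)

# Simulate replicated datasets from the fitted model and compare a test
# statistic -- the sample maximum -- with its observed value.
T_obs = y.max()
T_rep = np.array([rng.normal(mu_hat, sigma_hat, y.size).max()
                  for _ in range(2000)])

# A tiny tail probability flags an aspect of the data the model misses.
p_value = (T_rep >= T_obs).mean()
print(f"observed max = {T_obs:.2f}, p = {p_value:.3f}")
```

The fitted normal model is "rejected" not because its posterior probability is low, but because it makes predictions (about the largest observation) that differ in an important way from the data.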

Also, I agree with Christian’s characterization of Black-Scholes etc. as not “an accurate representation of reality, but rather a gentleman’s agreement between traders that served to agree on prices.” The way I put it was that these graduate programs in “financial mathematics / financial engineering” served a useful function by screening for students who were mathematically able and willing to work hard. It’s too bad they couldn’t have been learning statistics instead, but, for better or worse, competence in statistics is easier to fake than competence in math.

Christian also has an interesting conclusion:

Encouraging a total mistrust of anything scientific or academic is not helping in solving issues, but most surely pushes people in the arms of charlatans with ready answers.

I wonder what Taleb would say about this. Possibly he’d reply that it’s better to have citizens think critically than to be awed by their financial advisors.

13 thoughts on “Dennis Lindley’s and Christian Robert’s reviews of The Black Swan, also an aside about the most important philosophical point of confusion about Bayesian inference”

  1. what's up with the idea that popper became popular because of right-wing politics? can someone clarify this? is there something that everyone knows that i'm missing? (e.g., like the backstory of how f. a. hayek got the nobel)

  2. Whatever one's feelings about Popper, I think that it should be possible to appreciate the excellent work of Dennis Lindley even if you don't support public ownership of the means of production or even social security and a graduated income tax.

    And, regarding Bayes and politics, recall that <a href="http://www.stat.columbia.edu/~cook/MT/mt-search.cgi?search=bayes+china+xiao-li&IncludeBlogs=1&limit=20">at least some Chinese Communists opposed Bayesian statistics</a> because they felt that prior distributions were inherently counter-revolutionary. In the U.S., though, anti-Bayesians seem to be conservative–intellectually, if not politically–in that they favor the classical methods they were taught in school. And, nowadays, Bayesian methods are so popular that being an anti-Bayesian is kind of cool, in a cigar-smoking, politically incorrect, Dartmouth Review kind of way.

  3. Lindley's comments about Popper certainly struck me as over the top, and I don't see how the political dimension is relevant to a discussion of methodology. That said, I wonder if the distinction he may have been trying to draw was between Bayesianism and the simple version of falsifiability, in that a probability statement (excluding of course p=0 and p=1) can never be falsified in the strict sense.

    If I read you correctly, your take on model selection is that we can reject a poor model on the grounds that it has a very low posterior probability (perhaps compared with some other model), but should not interpret a high posterior probability as positive evidence for the truth of a model, given that, in the social sciences at least, our models are almost certainly wrong. Please correct me if I'm wrong on this point, but I take it this is the sense in which you argue Bayesianism accords roughly with Popperian philosophy of science.

    I'm completely sympathetic to your view that we should expect every model we consider to be misspecified, and should not think of model selection as a search for the "true" model. But it seems to me that there is still a sense in which Bayesianism is somewhat different from even a broadly understood version of Popperian falsification. While high posterior probabilities may not constitute evidence for the truth of a model, don't they still provide positive evidence, indicating that the model captures important aspects of the data? I suppose my point here is that, rejecting the notion that posteriors quantify the probability of truth, we are essentially left with a ranking of the quality of several theories, in terms of their ability to describe our observations. One could just as well describe this as "negative evidence" against the poor models as "positive evidence" for the good ones: it's an inherently relative notion. My broad reading of Popper, on the other hand, is that there is no such thing as positive evidence for a proposition, stated in terms of probabilities or otherwise, only negative evidence against it.

    I'm incredibly interested to hear your thoughts on this. I actually sat down at my computer this evening thinking "wouldn't it be interesting if Andrew Gelman said something about model selection and philosophy of science" and there it was. You've made my lame grad student weekend!

  4. A minor comment on Black-Scholes – I took a three-semester sequence of graduate finance classes at Chicago in the late 1970s (Fama, Fama, Ingersoll) and we knew back then that Black-Scholes wasn't correct, as Fama had already shown years before that the Gaussian distribution was too thin-tailed relative to actual stock price movements. However, it's a bias-variance tradeoff kind of thing: it gave biased answers, but they weren't biased much, and they allowed everybody to agree on the answer give or take a little insider information, thus greatly reducing the variance of people's opinions as to what the right answer was – and also the variance of the individual's error. Not so much what C.R. states – a gentlemen's agreement – because why wouldn't someone use their own better-quality beliefs to pump money away from all those who adhered to the agreement? – as a real benefit to all concerned from getting better, albeit not perfect, valuations.

    One problem I have with Taleb is that the book seems like a prescription for paralysis by fear of the unknown. We can't analyze it, we can't quantify it, we can't say anything about it except that it might be BIG and BAD, but somehow we still have to act in the face of it. I wonder if he keeps all his money in the form of canned food buried in his back yard.

  5. While a good statistician will carefully consider a model in light of its assumptions and limitations, the consumers of these models often take them on faith. I agree that Black-Scholes etc. are useful as "gentleman's agreements"; however, a large amount of money is changing hands based on these agreements. If the limitations of these models are not understood (or worse, if they are thought not to have limitations at all), then this is a serious problem.

  6. John, Taleb goes long on some deep out-of-the-money options and waits until something extreme happens. As the BS formula assumes normality of returns (i.e. thin tails), he buys these options relatively cheaply. This way he hedges against the possibility of losing anything more than a low premium, and he has unlimited upward potential.
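    To see why such options are cheap under Black-Scholes, here is a rough numerical sketch (all parameters illustrative, not taken from the thread): the model's premium for a deep out-of-the-money call is vanishingly small under the normality assumption.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call; assumes log-returns are
    normal (thin tails), which is exactly the assumption at issue."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# Illustrative deep out-of-the-money call: spot 100, strike 150, one month out.
premium = bs_call(S=100, K=150, r=0.01, sigma=0.20, T=1 / 12)
print(f"BS premium: {premium:.2e}")  # vanishingly small under normality
```

    Under a fat-tailed return distribution the same option can be worth many times more, which is the asymmetry the strategy exploits: the most you can lose is the small premium.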

  7. I agree with Lindley and Robert that the tone of the book is too harsh — however I think they are missing the forest by focusing on the trees. Taleb's main criticism seems to me to be that probability theory/statistics isn't robust enough to engineer a framework safe enough to LEVERAGE yourself 30 TIMES over. Nor did any mainstream statistician ever seem to really make this point loud and clear. Lindley's example on black/white swans is foolish and not something that can really be used in practice.

    To take an example from artificial intelligence and CS — yes, you could theoretically use logic to build an AI — of course it just hasn't happened. This suggests to me that first-order logic and its derivatives are an insufficient framework for creating an AI. Similarly, there doesn't seem to be a "safe" framework within probability and statistics for leveraging yourself 30 times over in a social-science-type environment.

  8. Where did Jeffreys comment on Popper and falsification? I've always been somewhat confused by the whole notion of falsification, e.g. How do you falsify a "true" hypothesis/model, and how do you falsify (in a strict sense) a probabilistic model?

  9. Standardized contracts for equity options have been trading on exchanges for decades. The idea that option traders are using Black-Scholes formulas as a kind of gentlemen's agreement is … quaint. I have not observed many gentlemen in that game; everyone is in it to win, and you don't win unless your prices are better than the market's. If you don't have better prices than the market, transaction costs and negative selection will quickly put you out of business.

    Anyway, Black-Scholes doesn't give you "a" price. Depending on what you think the future variance of price changes will be, you can get any price you want, more or less.

    If you look at the distributions implied by option prices, you see the market knows that returns are not Gaussian and that Black-Scholes is not correct.

    It's important to have a (data-determined) model for this non-normality. But it's more important to realize that corrections to an option's price from a better model of the tails of the distribution are dwarfed by the large uncertainty in any forecast of the future variance of the returns. A slight improvement in your forecast of variance will yield a more accurate price than having a better model for the tails. (In other words, the uncertainty in the values of the statistical parameters of your model is a more serious problem in practice than having a somewhat incorrectly specified model.)
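    The relative sizes here are easy to check numerically; a rough sketch with illustrative parameters (not from the comment), using the standard Black-Scholes call formula:

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, r, sigma, T):
    """Standard Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# An at-the-money call three months out, priced under two volatility
# forecasts that differ by two points -- a modest forecasting error.
low = bs_call(S=100, K=100, r=0.01, sigma=0.20, T=0.25)
high = bs_call(S=100, K=100, r=0.01, sigma=0.22, T=0.25)
print(f"sigma=0.20: {low:.2f}  sigma=0.22: {high:.2f}  shift: {high - low:.2f}")
```

    A two-point error in the volatility forecast moves the price by roughly ten percent, illustrating the commenter's point that parameter uncertainty can dominate the correction from a better tail model.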

    Option traders who trade with their own money know all this (there is a strong selection effect on this set of people) and act (set prices, control risks, etc.) accordingly. The problem with trading at the large banks is primarily an agency problem. People were trading using other people's money and had huge incentives to take big risks and engage in other financial mumbo-jumbo. I'm sure many of them knew that the massively leveraged, one way bets they were taking were incredibly risky – but their pay was maximized by taking these bets and marking prices their way etc. (In fairness, large banks are also full of idiots, especially in upper management, who probably had no idea of the insane risks their institutions were taking).

    I have not read Taleb's book, but from other things I have read by Taleb, I'm sure I would agree with Lindley.

  10. Frank, Bill: No, your statement that "your take on model selection is that we can reject a poor model on the grounds that it has a very low posterior probability (perhaps compared with some other model) . . ." does not represent my position. I view computations of a model's posterior probability as close to meaningless. I do agree with Popper that a model can be rejected, but I do so not based on its assessed posterior probability of being true but rather based on it making predictions which differ in important ways from data.

    John, Jared, JDM: In case this wasn't already clear, let me emphasize that I know almost nothing about what goes on with financial traders. What I meant by "gentleman's agreement" wasn't that these people avoid chances to arbitrage, but rather that these pricing schemes must represent a convention. I completely agree with JDM about the "other people's money" problem; this sort of thing is a great example to use in decision analysis classes.

  11. Dimitris –

    Taleb's "plan" is based on options traders all being much stupider than he is, not a winning strategy I think. Fama's original papers about the non-normality of returns (he settled on symmetric stable distributions, with characteristic functions of the form exp(-|t|^a), a ~ 1.7 IIRC, which don't have a finite variance) were published back in the 1960s – and were known by options traders back in the late 1970s, and probably much earlier.

    Whether or not Fama's early work is exactly correct matters less than Taleb's ignorance of well-known, 40-plus-year-old results, based on analysis of actual data, from a perennial shortlist candidate for the Nobel prize in Economics.

    His prescriptions for dealing with the unknown seem to be largely absent, and this is one of several things that annoyed me about the book; his strategy is based on dealing with people he presumes stupider than he is. It is not a strategy for dealing with the unknown itself, as probability and statistics are in some sense, but merely one of taking advantage of people who tend too much toward certainty about the benefits of their own strategies. Professional gamblers have been doing that for 20,000 years, I expect.

  12. I don't have experience with the application of Black-Scholes specifically. However, I have seen a number of cases of managers getting starry-eyed and turning off their critical thinking skills when it comes to complex math/stats models. The more complex the model, the smaller the group of people who actually understand its limitations. In the interviews with Taleb I have seen, he doesn't come across as anti-science at all but much more as, "it's better to have citizens think critically than be awed by their financial advisors." Or, for the bankers: if you are leveraged 30 to 1, it had better not have anything to do with faith in your math/stat modelers.
