Non-statistical thinking in the US foreign policy establishment

I’m a few weeks behind in my New Yorker reading and so just recently read this fascinating article by Ryan Lizza on the current administration’s foreign policy. He gives some insights into the transformation of Obama from antiwar candidate to a president conducting three wars.

Speaking as a statistician, though, what grabbed my eye was a doctrine of journalist/professor/policymaker Samantha Power. Lizza writes:

In 2002, after graduating from Harvard Law School, she wrote “A Problem from Hell,” which surveyed the grim history of six genocides committed in the twentieth century. Propounding a liberal-interventionist view, Power argued that “mass killing” on the scale of Rwanda or Bosnia must be prevented by other nations, including the United States. She wrote that America and its allies rarely have perfect information about when a regime is about to commit genocide; a President, therefore, must have “a bias toward belief” that massacres are imminent.

From a statistical perspective, this sounds completely wrong! If you want to argue that it’s a good idea to intervene, even if you’re not sure, or if you want to argue that it’s wise to intervene, even if the act of intervention will forestall the evidence for genocide that would be the motivation for intervention, that’s fine. It’s a cost-benefit analysis and it’s best to lay out the costs and benefits as clearly as possible (within the constraints established by military and diplomatic secrecy). But to try to shade the probabilities to get the decision you want . . . that doesn’t seem like a good idea at all!

To be fair, the above quote predates the Iraq WMD fiasco, our most notorious recent example of a “bias toward belief” that influenced policy. Perhaps Power has changed her mind on the virtues of biasing one’s belief.

P.S. Samantha Power has been non-statistical before.

P.P.S. Just in case anyone wants to pull the discussion in a more theoretical direction: No, Power’s (and, for that matter, Cheney’s) “bias toward belief” is not simply a Bayesian prior. My point here is that she’s constructing a belief system (a prior) based not on a model of what’s happening or even on a subjective probability but rather on what she needs to get the outcome she wants. That’s not Bayes. In Bayes, the prior and the utility function are separate.
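The point about keeping the prior and the utility function separate can be made concrete with a small worked example (all numbers here are invented for illustration): an honest, even skeptical, prior about whether a massacre is imminent, combined with an asymmetric loss function, can still favor intervention. There is no need to bias the belief itself; the asymmetry belongs in the losses.

```python
# A minimal sketch of a Bayesian decision in which beliefs and utilities
# play separate roles. All probabilities and losses are made up.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a binary hypothesis given one piece of evidence."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# Honest (low) prior that a massacre is imminent, updated on an
# ambiguous report that is somewhat more likely if the massacre is real.
p = posterior(prior=0.10, p_evidence_if_true=0.7, p_evidence_if_false=0.3)

# Asymmetric losses: failing to act before a real massacre is far worse
# than an unnecessary intervention. (Intervening is assumed, for
# simplicity, to cost the same in either state.)
loss_act = 10
loss_wait_if_true, loss_wait_if_false = 100, 0

expected_loss_act = loss_act
expected_loss_wait = p * loss_wait_if_true + (1 - p) * loss_wait_if_false

decide = "intervene" if expected_loss_act < expected_loss_wait else "wait"
```

Here the posterior probability of a massacre is only about 0.21, yet the decision is to intervene, because the losses are asymmetric. Distorting the prior instead would corrupt every other inference that depends on it.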

7 thoughts on “Non-statistical thinking in the US foreign policy establishment”

  1. Isn't the “bias toward belief” just a way of saying that, insofar as inertia is important in decision-making, the status quo should be closer to “intervene” than to “not intervene”?

    Insofar as the decision is made from scratch each period, the status quo doesn't matter; but if the status quo does matter, then trying to shift it is worthwhile.

  2. It sounds as if they (the journalist Lizza and policy wonk Power) are describing a form of risk calculation involving: (1) the probability that an event (i.e. genocide) will occur, (2) the magnitude of undesirable outcomes (i.e. deaths) that would happen if the event does occur, (3) the degree to which an intervention will be successful at preventing the event, and (4) the window of opportunity to act to prevent the event. If this is so, then Power seems to suggest that the scale of harm (item 2) and need for speed (item 4) outweigh our degree of knowledge to predict events (item 1) or predict success (item 3).

  3. Isaac:

    In that case I'd prefer "bias toward action," since that's what she'd be saying. I don't like "bias toward belief" because she seems to be conflating actions with beliefs. Because I want to do X, I'll assume A is true. As a statistician, I don't like this form of reasoning.

  4. Sorry to get caught up on an unimportant point, but: I don't think it's right to characterize Obama as an "antiwar candidate." He always said we shouldn't be in Iraq but were right to go into Afghanistan, and that we should get out of Iraq right away but try to finish the job in Afghanistan.

  5. Let me give an alternative interpretation. There's been some interesting work on decision-making when you don't know the probabilities. Under some circumstances, PREFERENCES can be described as using the worst-case probability distribution. So this would make sense if you viewed it as a statement about preferences. Gilboa and Schmeidler is the classic reference, and “ambiguity” is the standard name for the phenomenon.
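The worst-case idea in this comment can be sketched in a few lines, in the maxmin spirit of Gilboa and Schmeidler: the decision-maker entertains a set of candidate priors and evaluates each action by its worst-case expected utility. All numbers below are invented for illustration.

```python
# Maxmin expected utility over a set of priors (a toy sketch).

priors = [0.05, 0.10, 0.20]  # candidate probabilities of imminent genocide

# utilities[action][state]: payoffs in the two states (made-up numbers)
utilities = {
    "wait":      {"no_genocide": 0,   "genocide": -100},
    "intervene": {"no_genocide": -10, "genocide": -10},
}

def worst_case_eu(action):
    """Expected utility of an action under its least favorable prior."""
    return min(
        (1 - p) * utilities[action]["no_genocide"]
        + p * utilities[action]["genocide"]
        for p in priors
    )

best = max(utilities, key=worst_case_eu)
```

The resulting behavior looks as if the most pessimistic prior (here 0.20) were being used, but no single belief has been distorted; the pessimism lives in the preferences, which is exactly the distinction the comment is drawing.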

  6. This reminds me of the pricing measure in finance. The physical probability measure P says what you think will happen, but because people are risk-averse, the prices they pay for risky assets are distorted. Specifically, risky asset prices are typically lower than their expected payoffs. A common finance trick is to assume the existence of an alternative pricing measure Q, in which the probabilities are distorted so that asset prices appear to be the result of risk-neutral fair bets.

    When you try to extract probabilities from asset prices, then, all you can recover is the pricing measure Q. Mapping back into physical probabilities — which are unobserved in practice, except for things like surveys — is very difficult. Usually this is done by specifying an explicit formula for the market price of risk, which then provides a way to construct the Radon-Nikodym derivative for the change of measure.

    Anyway, it seems that Power has reasoned her way towards a key principle in theoretical asset pricing, even though we statisticians would prefer to work with physical probabilities.
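For concreteness, here is a toy one-period, two-state version of the P-versus-Q point in this comment; all numbers are invented. A risk-averse market prices the asset below its discounted expected payoff under P, and the risk-neutral probability recovered from the price shifts weight toward the bad state.

```python
# Physical measure P versus pricing measure Q in a two-state toy model.

r = 0.02                  # one-period risk-free rate
up, down = 120.0, 80.0    # asset payoffs in the up and down states
p_up = 0.6                # physical probability of the up state (measure P)

expected_payoff = p_up * up + (1 - p_up) * down   # 104 under P
price = 95.0              # observed market price, below 104/1.02 (risk premium)

# The risk-neutral probability q makes the price look like a fair bet:
#   price = (q*up + (1-q)*down) / (1 + r)   =>   solve for q
q_up = (price * (1 + r) - down) / (up - down)     # about 0.4225, below p_up
```

Note that q_up < p_up: under Q, probability mass has been pushed toward the worse outcome, which is the formal sense in which the pricing measure is a “distorted belief” serving a preference.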

  7. It might clarify Power's reasoning to note that "A Problem from Hell" consists of a series of case studies — including the U.S. government's response to mass atrocities in the Third Reich, Cambodia, Bosnia, and Rwanda — in which the United States, as a matter of historical fact, displayed a bias against believing that a genocide was imminent or underway, even when one was.

    It is hard to argue that U.S. policymakers have shown a significant countervailing bias toward believing mass atrocities were taking place when they weren't. Even in the run-up to the Iraq war, the Bush administration didn't erroneously claim that Saddam Hussein was at the moment engaging in mass atrocities, or would be in the coming days or weeks.

    Thus when Power encourages the U.S. government to have a bias toward belief on the issue of mass killing, she may simply be suggesting something like adjusting the sights on a rifle to account for wind: in order to hit the target (accurate factual conclusions), the U.S. government will apparently have to aim to the side (biasing itself toward belief in mass killing).

    Power's historical analysis may be wrong, or her prescription may be ineffective, but I'd say in her defense at least that there's nothing statistically naive or invalid in correcting for a latent structural bias by intentionally adopting a bias in the opposite direction.
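The rifle-sights correction described in this comment can be sketched numerically. The size and direction of the historical bias are assumptions here, and the `debias` function and the 0.15 shift are purely hypothetical.

```python
# A toy sketch of correcting a known, systematic downward bias in
# reported probabilities by applying an opposite shift. The shift size
# is a made-up assumption, not an estimate from any real data.

def debias(reported, shift=0.15):
    """Adjust a report assumed to understate the true probability."""
    return min(1.0, reported + shift)

# Analyst reports, each assumed (hypothetically) to run ~0.15 too low.
reports = [0.05, 0.30, 0.60]
corrected = [debias(r) for r in reports]
```

This is the commenter's point in miniature: a deliberate bias in one direction can be a legitimate correction for a latent structural bias in the other, provided the latent bias is real and roughly known.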

Comments are closed.