Decision analysis and quantum mechanics; or, making a decision about Schroedinger’s cat

One of the mysteries of quantum mechanics (as I recall from my days as a physics major, and from reading Roger Penrose’s books) is the jump from complex probability amplitudes to observed outcomes, and the relation between observation and measurement. Heisenberg, the 2-slit experiment, and that cat that’s both alive and dead, until it’s observed, at which point it becomes either alive or dead. As I recall from reading The Emperor’s New Mind, Penrose believed that it was not the act of measurement that collapsed the cat’s wavefunction, but rather the cat’s (or, more precisely, the original electron’s, whose state was uncertain) getting entangled with enough mass that the two possibilities could not simultaneously exist.

OK, fine. I haven’t done any physics since 1986 so I can’t comment on this. But it reminded me of something similar in decision making.

Consider a decision that must be made at some unspecified but approximately known time in the future. For example, a drug company must choose which among a set of projects to pursue (and does not have the resources to pursue all of them). The choice need not be made immediately, and waiting will allow more information to be gathered to make a more informed decision. At the same time, the clock is ticking and there are losses associated with delay. In addition to the obvious losses (not going full-bore on a promising project leads to a later expected release date, thus fewer lives saved and less money made), waiting ties up other resources of suppliers, customers, etc. [Yes, this example is artificial–I’m sure I can think of something better–but please bear with me on the general point.]
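
To make the tradeoff concrete, here is a toy calculation in Python with made-up numbers (a single hypothetical project that pays off if it succeeds, loses money if it fails, and a fixed cost of waiting); nothing hinges on the particulars, the point is only that the value of waiting for information has to be weighed against the cost of the delay.

```python
# Toy numbers, not from any real drug-development problem:
# the project pays 100 if it succeeds, loses 60 if it fails, P(success) = 0.5,
# and waiting one period to learn the outcome costs 15 (delayed release, tied-up resources).

p = 0.5
win, lose = 100.0, -60.0
delay_cost = 15.0

# Decide now: commit to the project only if its expected value beats doing nothing.
decide_now = max(p * win + (1 - p) * lose, 0.0)  # max(20, 0) = 20

# Wait, observe, then decide: commit only in the good branch, but pay for the delay.
wait_then_decide = p * max(win, 0.0) + (1 - p) * max(lose, 0.0) - delay_cost  # 50 - 15 = 35

print(decide_now, wait_then_decide)  # with these numbers waiting wins; push delay_cost past 30 and it doesn't
```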

So this is the connection to quantum mechanics. We have a decision, which will ultimately either kill a cat or not, and it makes sense to keep the decision open as long as possible, but at some point it becomes entangled with enough other issues that the decision basically makes itself, or, to put it another way, the decision just has to be made. The act of decision is equivalent to taking a measurement in the physical experiment.

I think there’s something here, although I’m not quite sure what.

P.S. Further discussion here.

7 thoughts on “Decision analysis and quantum mechanics; or, making a decision about Schroedinger’s cat”

  1. May I suggest perhaps a better example:

    "My Indecision Is Final: The Rise and Fall of Goldcrest Films" by Jake Eberts. Goldcrest was the film division of Penguin Books, responsible for the likes of Chariots of Fire, Ghandi, and Local Hero.

    On the physics side, here's what's meant to be an educational exposition of quantum trajectories:

    link

    In case the link doesn't make it, it's Am. J. Phys. 2002, 70(7), 719-737

  2. The question you are asking is one familiar to pharmas and oil companies. Real option theory (a variant of option theory) is a tool often used to value the elements of the decision: that is, how uncertainty arises on the technical side and how it arises on the market side. One is essentially creating a cone of possibilities that is attenuated by one's actions as well as those of others. A number of finance types have worked on these problems as an alternative to decision trees.
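
    To give a flavor of the calculation (only a stripped-down Python sketch with invented numbers, not how a pharma or oil company would actually set it up), here is a one-period binomial valuation of the option to defer a project:

    ```python
    # One-period binomial sketch of the "option to defer" a project (invented numbers).
    # V0 is the project's value today; next period it moves to V_up or V_down;
    # investing costs I at any time; r is the one-period risk-free rate.

    V0, V_up, V_down = 100.0, 140.0, 70.0
    I = 105.0
    r = 0.05

    # Commit now: plain NPV, which looks unattractive here.
    npv_now = V0 - I  # -5.0

    # Risk-neutral probability implied by the binomial move.
    q = ((1 + r) * V0 - V_down) / (V_up - V_down)  # 0.5

    # Defer: next period, invest only if the project is then worth more than it costs.
    defer_value = (q * max(V_up - I, 0.0) + (1 - q) * max(V_down - I, 0.0)) / (1 + r)

    print(npv_now, defer_value)  # -5.0 vs ~16.7: the right to wait is worth something even when NPV < 0
    ```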

  3. AG: Penrose believed that it was not the act of measurement that collapsed the cat's wavefunction, but rather the cat's (or, more precisely, the original electron's, whose state was uncertain) getting entangled with enough mass that the two possibilities could not simultaneously exist.

    The act of decision is equivalent to taking a measurement in the physical experiment.

    I think there's something here, although I'm not quite sure what.

    DF: The thing you correctly intuit is best described by the pre-Penrosian framing of quantum physics:

    The answer to the question "is light a wave or a particle" or "what is the position of this electron (or is it a neutron?)" depends on how you ask the question.

    The act of measurement interacts with that which is measured.

    Kahneman and Tversky (1979) distinguish "decision utility" and "experience utility." Economists who believe in "revealed preferences" do not make the distinction. K&T note that in certain situations, DecU doesn't just anticipate ExpU, it actually shapes it.

    Levin & Gaeth (1988) did the first (or at least most clever) experiment on quantum decision psychology. Subjects were non-vegetarians at the University of Iowa. They chomped on burgers described either as 97% lean or 3% fat. Even though they objectively experienced identical outcomes, the 97% lean group rated the burgers as less greasy, more tasty, etc.

    Kahneman and Tversky's "framing effects" and Slovic & Lichtenstein's "preference reversals" are like quantum physics effects. The challenge these experimental results pose to von Neumann & Morgenstern/Savage type axiom systems is as radical as the challenge posed by Heisenberg, Bohr, etc. to Newtonian physics.

    This is not widely accepted. Instead, behavioral economists pretend that cumulative prospect theory (with its kinky value and probability weighting functions) solves the problem posed by quantum decision findings (framing effects and preference reversals).
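
    For reference, the "kinks" are those in Tversky & Kahneman's (1992) cumulative-prospect-theory functional forms. Here is a quick Python sketch using their published median parameter estimates, just to show the shapes (gains-side probability weighting only; they fit a separate parameter for losses):

    ```python
    # Tversky & Kahneman's (1992) cumulative-prospect-theory functional forms,
    # with their published median parameter estimates, just to show the shapes.

    ALPHA, LAMBDA = 0.88, 2.25   # value-function curvature; loss aversion
    GAMMA = 0.61                 # probability-weighting parameter for gains

    def value(x):
        """Kinked value function: concave for gains, convex and steeper for losses."""
        return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

    def weight(p, gamma=GAMMA):
        """Inverse-S weighting: small probabilities overweighted, large ones underweighted."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    print(value(100), value(-100))     # ~57.5 vs ~-129.4: a loss looms larger than an equal gain
    print(weight(0.01), weight(0.99))  # ~0.055 vs ~0.91
    ```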

  4. Thanks to all of you for the references and links. Simon's comment is perhaps closest to what I was thinking of, in the sense that the unmade decision eventually becomes "heavier" and "heavier" until it has to get decided. As time passes, the decision becomes entangled with more and more other decisions until, in the Penrosian sense, it gets "observed."

    Deb's connection to framing effects is interesting. I've heard of some of these studies but hadn't made the connection. I agree that these results are important. However, I think the issue raised in my blog entry is slightly different, in that even if we had no cognitive illusions, and even if there were no framing effects, there is this phenomenon that an unmade decision, if left to fester, can contaminate more and more items with its uncertainty (just as with Schroedinger's cat), until at some point there are so many ways to "observe" it that the decision just gets made.

    To put it another way: actually making a decision closes off the tree, just as, in quantum mechanics, taking a measurement determines the outcome of the 2-slit (or S. cat) experiment. The making of the decision institutes a quantum change, and this is so even under classical utility theory.

    However, maybe these framing effects are relevant, in the sense that, if we were all classically-rational and had no framing effects, availability biases, and so forth, maybe we'd also be able to handle the infinite branching of decision options, and unmade decisions wouldn't seem so upsetting to us.

    To put it another way, when classical decision analysis "works," it only works within some limited framework (for example, studying a particular decision about allocating resources, or choosing among some fixed menu of options in a medical decision problem). And the quantum-decision-contamination issue arises when a real-life decision becomes coupled with other, observable outcomes, such as whether the cat lives or dies.

  5. There's an excellent article by Guido Fioretti in Metroeconomica last year which applied Glenn Shafer's "Evidence Theory" to the question of making decisions in pharmas. I think it handles the issue a bit more realistically than the real-options approach, although that is very valuable too.

    ahhh here's a version

    Generalised, of course, this problem in decision theory is the issue of liquidity preference, so presumably Keynes had something to say about it.

  6. AG: The choice need not be made immediately, and waiting will allow more information to be gathered to make a more informed decision. At the same time, the clock is ticking and there are losses associated with delay.

    When there is a choice about when to act, we often need to trade off the value of more information against the costs associated with delay. A policeman who waits to be 100% sure that a suspect has a gun minimizes the risk of a false alarm (shooting someone unarmed) but increases the risk of a miss (being shot). Delaying action often has informational benefits and practical costs.

    This is widely understood among decision analysts, and there are formalizations for quantifying the value of information so that it can be traded off against other attributes (see the sketch at the end of this comment).

    This is a difficult tradeoff and a general one, but I don't see it as having much to do with quantum mechanics, where the essence is that the act of measurement affects the thing being measured.
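
    A generic Python sketch of that formalization, the expected value of perfect information: the gap between the best you can do acting on the prior alone and what you could do, on average, if the uncertain state were revealed before acting. The payoff numbers below are invented, loosely echoing the armed/unarmed example:

    ```python
    # Expected value of perfect information (EVPI) for a generic discrete decision.
    # payoffs[action][state] and the prior over states are invented numbers.

    prior = {"armed": 0.1, "unarmed": 0.9}
    payoffs = {
        "act_now": {"armed": 10.0, "unarmed": -50.0},   # false alarm is costly
        "wait":    {"armed": -100.0, "unarmed": 5.0},   # a miss is costlier still
    }

    def expected(action):
        return sum(prior[s] * payoffs[action][s] for s in prior)

    # Best you can do acting on the prior alone.
    best_without_info = max(expected(a) for a in payoffs)  # -5.5 (wait)

    # Best you could do, on average, if the state were revealed before acting.
    best_with_info = sum(prior[s] * max(payoffs[a][s] for a in payoffs) for s in prior)  # 5.5

    evpi = best_with_info - best_without_info
    print(best_without_info, best_with_info, evpi)  # information here is worth up to 11 payoff units
    ```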
