“It would be as if any discussion of intercontinental navigation required a preliminary discussion of why the evidence shows that the earth is not flat” . . . “a weed that’s grown roots so deep that no amount of cutting and pulling will kill it”

I came across this blog post by Jonathan Weinstein that illustrates, once again, some common confusion about the ideas of utility and risk. Weinstein writes:

When economists talk about risk, we talk about uncertain monetary outcomes and an individual’s “risk attitude” as represented by a utility function. The shape of the function determines how willing the individual is to accept risk. For instance, we ask students questions such as “How much would Bob pay to avoid a 10% chance of losing $10,000?” and this depends on Bob’s utility function.

This is (a) completely wrong, and (b) known to be completely wrong. To be clear: what’s wrong here is not that economists talk this way. What’s wrong is the identification of risk aversion with a utility function for money. (See this paper from 1998 or a more formal argument from Yitzhak in a paper from 2000.)

It’s frustrating. Everybody knows that it’s wrong to associate a question such as “How much would Bob pay to avoid a 10% chance of losing $10,000?” with a utility function, yet people do it anyway. It’s not Jonathan Weinstein’s fault–he’s just calling this the “textbook definition”–but I guess it is the fault of the people who write the textbooks.

P.S. Yes, yes, I know that I’ve posted on this before. It’s just sooooooo frustrating that I’m compelled to write about it again. Unlike some formerly recurring topics on this blog, I don’t associate this fallacy with any intellectual dishonesty. I think it’s just an area of confusion. The appealing but wrong equation of risk aversion with nonlinear utility functions is a weed that’s grown roots so deep that no amount of cutting and pulling will kill it.

P.P.S. To elaborate slightly: The equation of risk aversion with nonlinear utility is empirically wrong (people are much more risk averse for small sums than could possibly make sense under the utility model) and conceptually wrong (risk aversion is an attitude about process rather than outcome).

P.P.P.S. I’ll have to write something more formal about this some time . . . in the meantime, let me echo the point made by many others that the whole idea of a “utility function for money” is fundamentally in conflict with the classical axiom of decision theory that preferences should depend only on outcomes, not on intermediate steps. Money’s value is not in itself but rather in what it can do for you, and in the classical theory, utilities would be assigned to the ultimate outcomes. (But even if you accept the idea of a “utility of money” as some sort of convenient shorthand, you still can’t associate it with attitudes about risky gambles, for the reasons discussed by Yitzhak and myself and which are utterly obvious if you ever try to teach the subject.)

P.P.P.P.S. Yes, I recognize the counterargument: that if this idea is really so bad and yet remains so popular, it must have some countervailing advantages. Maybe so. But I don’t see it. It seems perfectly possible to believe in supply and demand, opportunity cost, incentives, externalities, marginal costs and benefits, and all the rest of the package–without building it upon the idea of a utility function that doesn’t exist. To put it another way, the house stands up just fine without the foundations. To the extent that the foundations hold up at all, I suspect they’re being supported by the house.

20 thoughts on ““It would be as if any discussion of intercontinental navigation required a preliminary discussion of why the evidence shows that the earth is not flat” . . . “a weed that’s grown roots so deep that no amount of cutting and pulling will kill it””

  1. "Money's value is not in itself but rather in what it can do for you, and in the classical theory, utilities would be assigned to the ultimate outcomes."

    In all utility models, given assumptions about prices, etc., you can easily calculate the indirect utility for different levels of "money" (income), so I don't think these two concepts are at odds with each other.

    Where there is a disconnect (and I think you touch on this) is that risk aversion in Econ 101 is defined over income, where most of the experimental games we play are all defined over small gains or losses, which are hard to relate back to overall income.

    I'd say that's more the fault of our inability to recreate things in the lab…

    Also, have you considered prospect theory (i.e., Kahneman and Tversky)? It's not taught in Econ 101, and it's not grounded in rational preferences, but it still attributes risk-averse behaviour to a concave value function (in gains).

  2. Matt:

    Yes, I think prospect theory is great. It's not perfect, but I think that, as a default model of uncertainty/loss/risk aversion, it's much better than the curving utility model that is unfortunately the standard.

  3. I understand the frustration when people simply follow the theory without considering its empirical validity.

    But I recall that, when faced with real monetary gains/losses, the model does "much better," and that experimental evidence such as what was conducted in your class might be biased. The quotes are because my memory is very fuzzy; this is outside my research focus.

    From what I gather from your links, part of your complaint is that what economists mean by "risk averse" and what non-economists understand by it are two different concepts. Is this really much of an issue? Most people actively working on an issue should be able to get a grip on the difference quickly. Or are you worried that people will abuse the disconnect?

  4. From p. 172 of your 1998 "Bayes Demo" paper: "Where is the mistake? It is that fearing uncertainty is not necessarily the same as "risk aversion" in utility theory: the latter can be expressed as a concave utility function for money, whereas the former implies behavior that is not consistent with any utility function."

    I haven't had time to look at the other relevant references (e.g., Yitzhak), but the 1998 quote is in direct opposition to this blog post, right? As far as I can tell, you explicitly say the opposite in the post-post-script, for example.

  5. A comment on the utility value of money:

    This is (generally) shorthand for a more technical concept: the change in the value of the Lagrange multiplier on the budget constraint as you give a maximizing agent more wealth (i.e., make the constraint less stringent).

    A standard intro-level grad exercise in economics is showing that a utility maximization problem has a representation as an "indirect utility" function that takes wealth and prices as arguments. You can further show that the derivative of the indirect utility function with respect to wealth (measured in dollars) exactly equals the Lagrange multiplier of the maximization problem. THIS function is what economists are referring to when they describe a utility function of money, and it has a clean and well-defined mapping back to the fundamentals of the problem.

    The question then is what is the shadow value of an additional dollar of income, and how does that vary with the level of income. This follows directly from mapping the money into actual outcomes. These results can then be extrapolated into risk preferences given von Neumann-Morgenstern expected utility functions. We can argue about whether expected utility is a good or bad approximation, but the concept of the utility value of money is at the very least well-defined.
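The indirect-utility construction described in the comment above can be sketched numerically. This is a hypothetical illustration, not from the comment: it assumes Cobb-Douglas utility with made-up prices and preference weight, and checks that the wealth-derivative of the indirect utility function equals the Lagrange multiplier (the shadow value of a dollar).

```python
# Sketch of the indirect-utility / Lagrange-multiplier equivalence, assuming
# Cobb-Douglas utility u(x, y) = x**a * y**(1 - a).  The parameter a and the
# prices px, py are illustrative, not from the comment.

a, px, py = 0.3, 2.0, 5.0  # illustrative preference weight and prices

def indirect_utility(w):
    # Cobb-Douglas demands have the closed form x* = a*w/px, y* = (1-a)*w/py.
    x, y = a * w / px, (1 - a) * w / py
    return x**a * y**(1 - a)

def lagrange_multiplier(w):
    # First-order condition at the optimum: lambda = u_x / px.
    x, y = a * w / px, (1 - a) * w / py
    return a * x**(a - 1) * y**(1 - a) / px

w, h = 100.0, 1e-5
dv_dw = (indirect_utility(w + h) - indirect_utility(w - h)) / (2 * h)
print(dv_dw, lagrange_multiplier(w))  # the two should agree closely
assert abs(dv_dw - lagrange_multiplier(w)) < 1e-6
```

For Cobb-Douglas preferences the indirect utility happens to be linear in wealth, so this "utility of money" is not curved at all; curvature has to come from somewhere else in the specification.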

  6. Geof:

    The problem is that "risk aversion" describes several different sorts of behaviors. I think the term is used sloppily by economists and non-economists (but in different ways).

    Regarding "real monetary gains and losses": I think that in some situations with real monetary gains and losses, people's decisions are fit reasonably well by the utility model; in other such situations, not so much. The model is useful in some settings; my problem is with people who view it as foundational (in a descriptive sense).

    Noah:

    I think that utility theory can be useful; we use it, for example, in our decision analysis for radon measurement and remediation. My problem is when people take a useful idea such as the nonlinear utility function and apply it to problems where it is clearly inappropriate.

    Take a look at the Weinstein quote above: "we ask students questions such as "How much would Bob pay to avoid a 10% chance of losing $10,000?" and this depends on Bob's utility function." My response is: No, no no, for the reasons discussed in my linked article.

    Michael:

    Classical utility theory is a special case of prospect theory in which certain parameters are exactly zero. My impression is that, as a descriptive theory, you'll do better to allow these parameters to differ from zero. As a normative theory, though, prospect theory isn't so great. But that's partly the point–that to describe actual decisions, it can be useful to apply a theory that does not give good normative recommendations.

  7. I don't really agree with the critique here, Andrew. As another commenter mentioned, it is straightforward to map utility functions (over objects) into an indirect utility function defined over money. The intuitive way to do this is just to ask "would you prefer bundle X or Y amount of dollars?" As long as utility is monotonic in the goods, then the amount of money you would spend on a bundle is increasing in the utility you get from that bundle, and hence the amount of money you would spend is, itself, an indirect utility function.

    More seriously, "risk aversion" to economists generally means Arrow-Pratt risk aversion, which is defined as -u''/u'. If an agent satisfies the von Neumann-Morgenstern axioms, then they have a utility function unique up to positive affine transformations, and hence the constant of integration can be ignored and the original utility function can be recovered simply by knowing the Arrow-Pratt function. That is, Weinstein's statement is fine for EU maximizers. EU maximizers are, by implication of the EU axioms, not ambiguity-averse, do not suffer loss aversion, etc.

    Looking at the earlier posts you linked to, I think you're saying something like "ambiguity aversion is important too – and indeed, many other true things about human behavior other than concave utility can explain behavior under risk." And this is true! But I don't think this is a valid critique of the type of offhand implicit use of expected utility that Weinstein used. I think everyone agrees that declining marginal utility of money is basically true in the real world, and that a model of human behavior under risk should take that fact into account. Every individual theory, and indeed every collection of theories that make up a paradigm, is going to ignore some true things about the world – this is the definition of a model. It's going a bit far, I think, to say that this is "completely wrong".
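The Arrow-Pratt recovery mentioned in the comment above can be illustrated with a small sketch. The constant-absolute-risk-aversion (CARA) case is my choice of example, not the commenter's: if A(x) = -u''(x)/u'(x) is a known constant a, the utility function is pinned down up to affine transformation as u(x) = -exp(-a*x).

```python
import math

# Sketch: constant Arrow-Pratt absolute risk aversion A(x) = -u''(x)/u'(x) = a
# pins down utility up to a positive affine transformation: u(x) = -exp(-a*x).
a = 0.5  # illustrative risk-aversion coefficient

def u(x):
    return -math.exp(-a * x)

# Check A(x) = a numerically via central finite differences.
x, h = 2.0, 1e-4
u1 = (u(x + h) - u(x - h)) / (2 * h)          # approximates u'(x)
u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2  # approximates u''(x)
print(-u2 / u1)  # should be close to a = 0.5
assert abs(-u2 / u1 - a) < 1e-4
```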

  8. Afinetheorem:

    I agree that, to the extent they model behavior (either normatively or descriptively), people's utility functions for money show declining marginal utility. But, as I wrote in my paper (and as Yitzhak wrote in more detail a couple years later), that decline is much much much too slow to explain people's uncertainty aversion over small-scale gambles.

    That's why I think Weinstein is completely wrong when he writes, "we ask students questions such as 'How much would Bob pay to avoid a 10% chance of losing $10,000?' and this depends on Bob's utility function."

    As I wrote in one of the above comments, I think utility theory is a wonderful normative theory and perhaps, in some settings, a useful descriptive theory. The problem is when a theorist's first reaction, when seeing a behavior, is to model it with a utility function. Sometimes this is reasonable, sometimes it's not.

    As many textbook writers have explained over the years, the normative point of utility theory is not, "You should act so as to maximize your expected utility" but rather "You should act in a way consistent with the rationality axioms, thus acting as if you have a utility function that you are maximizing." The mistake comes when people stretch the theory to beyond its descriptive limit and say that everyone actually is acting in that way.

    In some cases, you can take a behavior that apparently violates the rationality axioms and, by a judicious construction of a utility function, place that behavior within the expected-utility framework. And that can be helpful in quantifying tradeoffs. We have an example in chapter 22 of Bayesian Data Analysis of a decision problem involving a balance between dollars and lives, and I think utility theory was helpful for us in understanding the problem and giving decision recommendations.

    But in other cases, such as the example given in my teaching article, utility theory simply does not fit, and it's an example of modeling where the epicycles become more prominent than the actual orbits they were brought in to fix. Even in these settings, though, expected utility, when reasonably interpreted, can be a useful normative model. The challenge is in identifying which aspects of the behavior are furthering the actor's goals and which actions are getting in the way, and formulating the utility model and decision recommendations appropriately. This sort of model won't work if it's too loose (tautologically defining all actions and preferences as rational), and it also won't work if it's too restrictive (for example, if the utility function for money is restricted to be linear). The challenge is to find the right balance for the particular problem under study.

  9. In the $20/30/40 attitude example from your 1998 paper, I think the assumption that p is constant regardless of x is unrealistic. When we talk about $1 billion, we are indifferent between $1 billion + $1 and $1 billion - $1, so p should be almost 0.5.
    If we set p(x) = 0.5 + 0.05/(x/10), i.e., p = 0.55 when x = $10, p = 0.525 when x = $20, p = 0.517 when x = $30, and so forth, then U(x) does not converge as x approaches infinity, although it is still concave. So, as far as this example is concerned, the problem seems to be in the formulation of the utility function, not in the concept of a utility function itself.
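Himaginary's claim can be checked numerically. The sketch below is my construction, under the assumption that p is the indifference weight on the larger prize in each statement U(x) = p·U(x+10) + (1-p)·U(x-10), so each successive utility increment shrinks by the factor (1-p)/p. With constant p = 0.55 the increments decay geometrically and U is bounded; with p(x) = 0.5 + 0.05/(x/10) the partial sums keep growing.

```python
# Sketch comparing the utility increments implied by a constant p = 0.55
# with those implied by himaginary's p(x) = 0.5 + 0.05/(x/10).  Each
# indifference statement U(x) = p*U(x+10) + (1-p)*U(x-10) shrinks the next
# increment by the factor (1-p)/p.

def total_gain(p_of_x, steps):
    """Sum of utility increments U(x+10) - U(x), starting from x = 10."""
    d, total, x = 1.0, 0.0, 10.0
    for _ in range(steps):
        total += d
        p = p_of_x(x)
        d *= (1 - p) / p   # next increment, from the indifference condition
        x += 10.0
    return total

const_p = lambda x: 0.55
vary_p = lambda x: 0.5 + 0.05 / (x / 10.0)  # himaginary's p(x)

for n in (100, 10_000, 1_000_000):
    print(n, total_gain(const_p, n), total_gain(vary_p, n))
# Constant p: the total approaches the geometric limit 1/(1 - 0.45/0.55) = 5.5,
# so U is bounded.  Varying p: the total keeps growing with n, so U(x) does
# not converge as x goes to infinity, while staying concave throughout.
```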

  10. Himaginary:

    Nope. Just take one of these statements: for example, a person being indifferent between an extra $30 and a lottery with a .55 chance of getting $40 and a .45 chance of getting $20. That one preference alone, if (mistakenly) taken to represent a preference based on a utility function for money, induces a utility function with such extreme curvature as to make no sense.
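To put a rough number on "extreme curvature": the sketch below is my construction, and the wealth levels W and the CRRA functional form u(w) = w^(1-g)/(1-g) are illustrative assumptions, not from the post. It solves for the CRRA coefficient implied by the stated indifference when the gamble is taken over total wealth W, where it forces marginal utility to fall by the factor 0.45/0.55 over a span of just $10.

```python
# Rough check of the "extreme curvature" claim: find the CRRA coefficient g
# making u(W+30) = 0.55*u(W+40) + 0.45*u(W+20), for illustrative wealth W.

def gap(g, W):
    # Normalizing wealth by W is a positive rescaling of utility, so it leaves
    # preferences unchanged while keeping the powers numerically representable.
    u = lambda w: (w / W) ** (1 - g) / (1 - g)
    return 0.55 * u(W + 40) + 0.45 * u(W + 20) - u(W + 30)

def implied_gamma(W, lo=1e-6, hi=10_000.0):
    """Bisect for the CRRA coefficient g that makes the person indifferent."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(mid, W) > 0:   # still prefers the lottery: need more curvature
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for W in (100.0, 1_000.0, 10_000.0):
    print(W, implied_gamma(W))
# The implied coefficient grows roughly in proportion to W (around 200 at
# W = $10,000) -- far beyond any curvature consistent with choices people
# actually make over larger stakes.
```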

  11. I have to admit I'm a little confused after reading section 5 in the link as well.

    Is what you are saying as simple as the fact that there is a difference between the curvature of a utility function under certainty on the one hand, and a similar non-linear tradeoff having to do with uncertainty on the other?

    I.e., let's say it takes $100 to produce the first n units of utility, but an additional $150 to produce the next n units. For sure that is curvature, i.e., declining marginal utility. (And also, I am thinking of the $100 in terms of the utility of the consumption you can buy with the $100.)

    With uncertainty, a 50/50 chance of $250 or $0 might be considered equivalent to the utility of a certain $100. The fact that it takes an additional $150 on the upside to compensate for the lack of $100 on the downside is a very different animal from the curvature described in the discussion of certainty. I.e., even if utility under certainty were strictly linear, one would still need a theory to describe the certainty/uncertainty tradeoff?

  12. Thank you for responding to my comment.
    But preference is, after all, an economic variable. So, like all other economic variables, it is subject to observation error. I don't think one observation error can disprove the existence of an underlying coherent, concave, non-convergent utility function. And if such a function exists, how can you be so sure that it is useless for economic analysis?

  13. Robert:

    What I'm saying is that people's responses to questions about decision problems–and even people's actual decisions–reflect risk/uncertainty aversion in a way that is not at all consistent with any utility function.

    Himaginary:

    I have no problem with measurement error being part of the model. But I do object to the quotation that got things started: "The shape of the function determines how willing the individual is to accept risk. For instance, we ask students questions such as 'How much would Bob pay to avoid a 10% chance of losing $10,000?' and this depends on Bob's utility function."

  14. @Andrew;

    No, prospect theory is not a generalization of utility theory.

    Prospect theory deals only with the atomic or elementary choices–it was never designed to deal with the decompositional rules that make up vNM or its extensions.

    Many alternatives to vNM cardinalization are described in Peter Fishburn's book, Nonlinear Preference and Utility Theory.

  15. thanks andrew! i was silly and did not search the blog for yitzhak. instead i searched on google for yitzhak and utility, and so only managed to find this very blog entry and the israeli pm.

  16. Hi, I guess I should check the incoming links on our blog more often. Here I am a month late, but better than never.

    Andrew, if you read the whole post, you know that I was briefly introducing the standard setup, not to promote it, but quite the opposite, to point out that it gives an overly narrow definition of risk. My point, which I won't detail here, is ultimately orthogonal to your well-known points (which are valid also), so I think I had something to say even to someone who considers expected utility discredited. To take your metaphor from the title, I was not discussing intercontinental navigation, but city planning, and for this a flat-earth model is actually superior since it avoids carrying around small terms which distract from the analysis.

    In the brief quote that you included, I was simply making a factual statement about the kinds of exercises that students do. In such exercises, Bob's decision certainly "depends on his utility function," because Bob obviously isn't a real person but a mathematical object. Whether Bob is a good or poor model of three-dimensional human beings is a totally separate question, and one I certainly wasn't weighing in on in that paragraph. So, it is a bit jarring to see this rather uncontroversial paragraph, which doesn't even hint at what the actual point of my article was, followed by "this is completely wrong," with no further reference made to anything which actually was the topic of my post. I appreciate that you said expected-utility theory isn't my "fault," but I think you still may have left many readers with the impression that I was promoting the orthodoxy, oblivious to any flaws, when my post goes in quite a different direction.

    You should know that at economic theory seminars many, perhaps most models use expected utility as one component; that everyone in the room is aware of the critiques you mention; and that no one stalks out of the room when the model is introduced, muttering about a flat earth. This isn't because we are slaves to an orthodoxy, but because we know that when there are many other complications, it may be a good idea to look at a problem on a flat earth before proceeding to spheres or tori.

    For the curious, my post is here: http://theoryclass.wordpress.com/2010/06/29/risky

  17. Jonathan:

    I realized that my critiques are not new–my own discussion of the point, which I link to above, appeared in an article about teaching, and I did not claim it as a research contribution–and I also realized that the point made in your blog was orthogonal to mine. I regret any confusion that might have arisen, if it seemed that I implied otherwise.

    As a frequent user of linear regressions, normal distributions, and various other conventional assumptions, I take your point that it can be a good idea to start one's work on a flat earth and wait to model the curvature until it's necessary to put in the effort to do so.

    And, as a current teacher of an introductory course, I fully recognize the benefits of simplifying a model when introducing it to students. It's only after the students can really work with the simple model that they can appreciate its flaws in a real way (which is one of the points of that teaching article I wrote).

    In my view, your comment is about 90% correct. The only part I might disagree with is the question of when an economic modeler should get off the utility-function-for-money bus. I've had many conversations with economists who should know better–who do know better–who are inclined to explain behavior such as "indifference between $30 and a (55% chance of $40, 45% chance of $20)" using a curving utility function for money, only giving up on the idea after a long, drawn-out derivation. (I think similar frustrations might have motivated Yitzhak to write his 2000 paper on this topic.)

    I agree there are many places where utility analysis is great–see chapter 22 of Bayesian Data Analysis for some examples–but there are some places where it's clearly inappropriate from the start, including various problems involving lotteries and high levels of uncertainty. Utility functions are great, but I don't think it makes sense to use a sharply curving nonlinear utility function for money to explain the general phenomenon that people don't like uncertainty. (And, yes, I recognize that "don't like uncertainty" is a crude summary of a complicated psychological phenomenon.)

    There I go, devoting nearly half my response to the 10% I disagree with. So let me conclude by emphasizing that I agree with most of what you wrote above.
