
This post is by Phil.

I love this post by Jialan Wang. Wang "downloaded quarterly accounting data for all firms in Compustat, the most widely-used dataset in corporate finance that contains data on over 20,000 firms from SEC filings" and looked at the statistical distribution of leading digits in various pieces of financial information. As expected, the distribution is very close to what is predicted by Benford's Law.

Very close, but not identical. But does that mean anything? Benford's "Law" isn't really a law; it's more of a rule or principle: it's certainly possible for the distribution of leading digits in financial data --- even a massive corpus of it --- to deviate from the rule without this indicating massive fraud or error. But, aha, Wang also looks at how the deviation from Benford's Law has changed over time, and looks at it by industry, and this is where things get really interesting and suggestive. I really can't summarize any better than Wang did, so click on the first link in this post and go read it. But come back here to comment!
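For readers who haven't seen it: Benford's Law predicts that the leading digit d appears with probability log10(1 + 1/d), so a 1 should lead about 30% of the time and a 9 under 5%. Here's a minimal sketch of the kind of comparison Wang is making (a toy illustration only, not Wang's analysis; the multiplicative-growth data below is made up):

```python
import math
from collections import Counter

# Benford's Law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

# Toy data: a multiplicative growth process, the sort of thing
# that tends to follow Benford's Law closely.
values = [100 * 1.05 ** k for k in range(1000)]
counts = Counter(leading_digit(v) for v in values)
emp = {d: counts.get(d, 0) / len(values) for d in range(1, 10)}

for d in range(1, 10):
    print(d, f"benford={benford[d]:.3f}", f"empirical={emp[d]:.3f}")
```

With real financial data you would do the same tabulation and then ask, as Wang does, how the empirical frequencies drift from the Benford frequencies over time and across industries.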

I think I'm starting to resolve a puzzle that's been bugging me for a while.

Pop economists (or, at least, pop micro-economists) are often making one of two arguments:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with "why do they do that?" sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there's some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with "we can do better" claims such as why we should fire 80% of public-school teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.

The trick is knowing whether you're gonna get 1 or 2 above. They're complete opposites!

Our story begins . . .

Here's a quote from Steven Levitt:

One of the easiest ways to differentiate an economist from almost anyone else in society is to test them with repugnant ideas. Because economists, either by birth or by training, have their mind open, or skewed in just such a way that instead of thinking about whether something is right or wrong, they think about it in terms of whether it's efficient, whether it makes sense. And many of the things that are most repugnant are the things which are indeed quite efficient, but for other reasons -- subtle reasons, sometimes, reasons that are hard for people to understand -- are completely and utterly unacceptable.

As statistician Mark Palko points out, Levitt is making an all-too-convenient assumption that people who disagree with him are disagreeing because of closed-mindedness. Here's Palko:

There are few thoughts more comforting than the idea that the people who disagree with you are overly emotional and are not thinking things through. We've all told ourselves something along these lines from time to time.

I could add a few more irrational reasons to disagree with Levitt: political disagreement (on issues ranging from abortion to pollution) and simple envy at Levitt's success. (It must make the haters even more irritated that Levitt is, by all accounts, amiable, humble, and a genuinely nice guy.) In any case, I'm a big fan of Freakonomics.

But my reaction to reading the above Levitt quote was to think of the puzzle described at the top of this entry. Isn't it interesting, I thought, that Levitt is identifying economists as rational and ordinary people as irrational. That's argument 2 above. In other settings, I think we'd hear him saying how everyone responds to incentives and that what seems like "efficiency" to do-gooding outsiders is actually not efficient at all. The two different arguments get pulled out as necessary.

The set of all sets that don't contain themselves

Which in turn reminds me of this self-negating quote from Levitt protégé Emily Oster:

anthropologists, sociologists, and public-health officials . . . believe that cultural differences--differences in how entire groups of people think and act--account for broader social and regional trends. AIDS became a disaster in Africa, the thinking goes, because Africans didn't know how to deal with it.

Economists like me [Oster] don't trust that argument. We assume everyone is fundamentally alike; we believe circumstances, not culture, drive people's decisions, including decisions about sex and disease.

I love this quote for its twisted logic. It's Russell's paradox all over again. Economists are different from everybody else, because . . . economists "assume everyone is fundamentally alike"! But if everyone is fundamentally alike, how is it that economists are different "from almost anyone else in society"? All we can say for sure is that it's "circumstances, not culture." It's certainly not "differences in how entire groups of people think and act"--er, unless these groups are economists, anthropologists, etc.

OK, fine. I wouldn't take these quotations too seriously; they're just based on interviews, not careful reflection. My impression is that these quotes come from a simple division of the world into good and bad things:

- Good: economists, rationality, efficiency, thinking the unthinkable, believing in "circumstances"

- Bad: anthropologists, sociologists, public-health officials, irrationality, being deterred by repugnant ideas, believing in "culture"

Good is entrepreneurs, bad is bureaucrats. At some point this breaks down. For example, if Levitt is hired by a city government to help reform its school system, is he a rational, taboo-busting entrepreneur (a good thing) or a culture-loving bureaucrat who thinks he knows better than everybody else (a bad thing)? As a logical structure, the division into Good and Bad has holes. But as emotionally-laden categories ("fuzzy sets," if you will), I think it works pretty well.

The solution to the puzzle

OK, now to return to the puzzle that got us started. How is it that economics-writers such as Levitt are so comfortable flipping back and forth between argument 1 (people are rational) and argument 2 (economists are rational, most people are not)?

The key, I believe, is that "rationality" is a good thing. We all like to associate with good things, right? Argument 1 has a populist feel (people are rational!) and argument 2 has an elitist feel (economists are special!). But both are ways of associating oneself with rationality. It's almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you're celebrating the rationality of others.


I'm not saying that arguments based on rationality are necessarily wrong in particular cases. (I can't very well say that, given that I wrote an article on why it can be rational to vote.) I'm just trying to understand how pop-economics can so rapidly swing back and forth between opposing positions. And I think it's coming from the comforting presence of rationality and efficiency in both formulations. It's ok to distinguish economists from ordinary people (economists are rational and think the unthinkable, ordinary people don't) and it's also ok to distinguish economists from other social scientists (economists think ordinary people are rational, other social scientists believe in "culture"). You just have to be careful not to make both arguments in the same paragraph.

P.S. Statisticians are special because, deep in our bones, we know about uncertainty. Economists know about incentives, physicists know about reality, movers can fit big things in the elevator on the first try, evolutionary psychologists know how to get their names in the newspaper, lawyers know you should never never never talk to the cops, and statisticians know about uncertainty. Of that, I'm sure.

Macro causality


David Backus writes:

This is from my area of work, macroeconomics. The suggestion here is that the economy is growing slowly because consumers aren't spending money. But how do we know it's not the reverse: that consumers are spending less because the economy isn't doing well? As a teacher, I can tell you that it's almost impossible to get students to understand that the first statement isn't obviously true. What I'd call the demand-side story (more spending leads to more output) is everywhere, including this piece, from the usually reliable David Leonhardt.

This whole situation reminds me of the story of the village whose inhabitants support themselves by taking in each others' laundry. I guess we're rich enough in the U.S. that we can stay afloat for a few decades just buying things from each other?

Regarding the causal question, I'd like to move away from the idea of "Does A cause B or does B cause A" and toward a more intervention-based framework (Rubin's model for causal inference) in which we consider effects of potential actions. See here for a general discussion. Considering the example above, a focus on interventions clarifies some of the causal questions. For example, if you want to talk about the effect of consumers spending less, you have to consider what interventions you have in mind that would cause consumers to spend more. One such intervention is the famous helicopter drop, but there are others, I assume. Conversely, if you want to talk about the poor economy affecting spending, you have to consider what interventions you have in mind to make the economy go better.

In that sense, instrumental variables are a fundamental way to think of just about all causal questions of this sort. You start with variables A and B (for example, consumer spending and economic growth). Instead of picturing A causing B or B causing A, you consider various treatments that can affect both A and B.

All my discussion is conceptual here. As I never tire of saying, my knowledge of macroeconomics hasn't developed since I took econ class in 11th grade.

Dave Backus points me to this review by anthropologist Mike McGovern of two books by economist Paul Collier on the politics of economic development in Africa. My first reaction was that this was interesting but non-statistical so I'd have to either post it on the sister blog or wait until the 30 days of statistics was over. But then I looked more carefully and realized that this discussion is very relevant to applied statistics.

Here's McGovern's substantive critique:

Much of the fundamental intellectual work in Collier's analyses is, in fact, ethnographic. Because it is not done very self-consciously and takes place within a larger econometric rhetoric in which such forms of knowledge are dismissed as "subjective" or worse still biased by the political (read "leftist") agendas of the academics who create them, it is often ethnography of a low quality. . . .

Despite the adoption of a Naipaulian unsentimental-dispatches-from-the-trenches rhetoric, the story told in Collier's two books is in the end a morality tale. The tale is about those countries and individuals with the gumption to pull themselves up by their bootstraps or the courage to speak truth to power, and those power-drunk bottom billion elites, toadying sycophants, and soft-hearted academics too blinded by misplaced utopian dreams to recognize the real causes of economic stagnation and civil war. By insisting on the credo of "just the facts, ma'am," the books introduce many of their key analytical moves on the sly, or via anecdote. . . . This is one explanation of how he comes to the point of effectively arguing for an international regime that would chastise undemocratic leaders by inviting their armies to oust them--a proposal that overestimates the virtuousness of rich countries (and poor countries' armies) while it ignores many other potential sources of political change . . .

My [McGovern's] aim in this essay is not to demolish Collier's important work, nor to call into question development economics or the use of statistics. . . . But the rhetorical tics of Collier's books deserve some attention. . . . if his European and North American audiences are so deeply (and, it would seem, so easily) misled, why is he quick to presume that the "bottom billion" are rational actors? Mightn't they, too, be resistant to the good sense purveyed by economists and other demystifiers?

Now to the statistical modeling, causal inference, and social science. McGovern writes of Collier (and other quantitatively-minded researchers):

Portions of the two books draw on Collier's academic articles to show one or several intriguing correlations. Having run a series of regressions, he identifies counterintuitive findings . . . However, his analysis is typically a two-step process. First, he states the correlation, and then, he suggests an explanation of what the causal process might be. . . . Much of the intellectual heavy lifting in these books is in fact done at the level of implication or commonsense guessing.

This pattern (of which McGovern gives several convincing examples) is what statistician Kaiser Fung calls story time--the pivot from the quantitative finding to the speculative explanation. My favorite recent example remains the claim that "a raise won't make you work harder." As with McGovern's example, the "story time" hypothesis there may very well be true (under some circumstances), but the statistical evidence doesn't come close to proving the claim or even convincing me of its basic truth.

The story of story time

But story time can't be avoided. On one hand, there are real questions to be answered and real decisions to be made in development economics (and elsewhere), and researchers and policymakers can't simply sit still and say they can't do anything because the data aren't fully persuasive. (Remember the first principle of decision analysis: Not making a decision is itself a decision.)

From the other direction, once you have an interesting quantitative finding, of course you want to understand it, and it makes sense to use all your storytelling skills here. The challenge is to go back and forth between the storytelling and the data. You find some interesting result (perhaps an observational data summary, perhaps an analysis of an experiment or natural experiment), this motivates a story, which in turn suggests some new hypotheses to be studied. Yu-Sung and I were just talking about this today in regard to our article on public opinion about school vouchers.

The question is: How do quantitative analysis and story time fit into the big picture? Mike McGovern writes that he wishes Paul Collier had been more modest in his causal claims, presenting his quantitative findings as "intriguing and counterintuitive correlations" and frankly recognizing that exploration of these correlations requires real-world understanding, not just the rhetoric of hard-headed empiricism.

I agree completely with McGovern--and I endeavor to follow this sort of modesty in presenting the implications of my own applied work--and I think it's a starting point for Collier and others. Once they recognize that, indeed, they are in story time, they can think harder about the empirical implications of their stories.

The trap of "identifiability"

As Ole Rogeberg writes (following up on ideas of James Heckman and others), the search for clean identification strategies in social research can be a trap, in that it can result in precise but irrelevant findings tied to broad but unsupported claims. Rogeberg has a theoretical model explaining how economists can be so rigorous in parts of their analysis and so unrigorous in others. Rogeberg sounds very much like McGovern when he writes:

The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work in some dimensions, and almost incompetently argued claims about the real world on the other.

The virtue of description

Descriptive statistics is not just for losers. There is value in revealing patterns in observational data, correlations or predictions that were not known before. For example, political scientists were able to forecast presidential election outcomes using information available months ahead of time. This has implications for political campaigns--and no causal identification strategy was needed. Countries with United Nations peacekeeping take longer, on average, to revert to civil war, compared to similarly-situated countries without peacekeeping. A fact worth knowing, even before the storytelling starts. (Here's the link, which happens to also include another swipe at Paul Collier, this time from Bill Easterly.)

I'm not convinced by every correlation I see. For example, there was this claim that warming increases the risk of civil war in Africa. As I wrote at the time, I wanted to see the time series and the scatterplot. A key principle in applied statistics is that you should be able to connect the raw data, your model, your methods, and your conclusions.

The role of models

In a discussion of McGovern's article, Chris Blattman writes:

Economists often take their models too seriously, and too far. Unfortunately, no one else takes them seriously enough. In social science, models are like maps; they are useful precisely because they don't explain the world exactly as it is, in all its gory detail. Economic theory and statistical evidence doesn't try to fit every case, but rather find systematic tendencies. We go wrong to ignore these regularities, but we also go wrong to ignore the other forces at work--especially the ones not so easily modeled with the mathematical tools at hand.

I generally agree with what Chris writes, but here I think he's a bit off by taking statistical evidence and throwing it in the same category as economic theory and models. My take-away from McGovern is that the statistical evidence of Collier et al. is fine; the problem is with the economic models which are used to extrapolate from the evidence to the policy recommendations. I'm sure Chris is right that economic models can be useful in forming and testing statistical hypotheses, but I think the evidence can commonly be assessed on its own terms. (This is related to my trick of understanding instrumental variables by directly summarizing the effect of the instrument on the treatment and the outcome without taking the next step and dividing the coefficients.)
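That trick can be shown with a small simulation (entirely made-up data and effect sizes; a sketch of the idea, not any of the analyses discussed here). The effect of the instrument on the treatment and its effect on the outcome are each simple, interpretable comparisons; dividing them gives the conventional IV (Wald) estimate:

```python
import random

random.seed(0)
n = 10_000

# Simulated data with an unobserved confounder u: the naive regression of y on t
# is biased, but a binary instrument z lets us recover the true effect of t on y.
z = [random.randint(0, 1) for _ in range(n)]                        # instrument
u = [random.gauss(0, 1) for _ in range(n)]                          # unobserved confounder
t = [0.5 * zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]    # treatment
y = [2.0 * ti + ui + random.gauss(0, 1) for ti, ui in zip(t, u)]    # outcome; true effect is 2

def mean(xs):
    return sum(xs) / len(xs)

# Effect of the instrument on the treatment (difference in means) ...
dt = mean([ti for ti, zi in zip(t, z) if zi]) - mean([ti for ti, zi in zip(t, z) if not zi])
# ... and on the outcome. Each is meaningful on its own terms.
dy = mean([yi for yi, zi in zip(y, z) if zi]) - mean([yi for yi, zi in zip(y, z) if not zi])

# The extra step: dividing the two effects gives the IV (Wald) estimate.
iv = dy / dt

# For contrast, the naive OLS slope of y on t, which u confounds upward.
tbar, ybar = mean(t), mean(y)
ols = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sum((ti - tbar) ** 2 for ti in t)

print(f"z -> t: {dt:.2f}, z -> y: {dy:.2f}, IV ratio: {iv:.2f}, naive OLS: {ols:.2f}")
```

The point of stopping before the division is that the two instrument effects are honest descriptive summaries of the data; the ratio is where the extra modeling assumptions come in.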

To put it another way: I would separate the conceptually simple statistical models that are crucial to understanding evidence in any complex-data setting, from the economics (or, more generally, social science) models that are needed to apply empirical correlations to real-world decisions.

Ole Rogeberg writes:

Here's a blogpost regarding a new paper (embellished with video and an essay) where a colleague and I try to come up with an explanation for why the discipline of economics ends up generating weird claims such as those you've blogged on previously regarding rational addiction.

From Ole's blog:

The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work in some dimensions, and almost incompetently argued claims about the real world on the other. . . .

Our explanation can be put in terms of the research process as an "evolutionary" process: Hunches and ideas are turned into models and arguments and papers, and these are "attacked" by colleagues who read drafts, attend seminars, perform anonymous peer-reviews or respond to published articles. Those claims that survive this process are seen as "solid" and "backed by research." If the "challenges" facing some types of claims are systematically weaker than those facing other types of claims, the consequence would be exactly what we see: Some types of "accepted" claims would be of high standard (e.g., formal, theoretical models and certain types of statistical fitting) while other types of "accepted claims" would be of systematically lower quality (e.g., claims about how the real world actually works or what policies people would actually be better off under).

In our paper, we pursue this line of thought by identifying four types of claims that are commonly made - but that require very different types of evidence (just as the Pythagorean theorem and a claim about the permeability of shale rock would be supported in very different ways). We then apply this to the literature on rational addiction and argue that this literature has extended theory and that, to some extent, it is "as if" the market data was generated by these models. However, we also argue that there is (as good as) no evidence that these models capture the actual mechanism underlying an addiction or that they are credible, valid tools for predicting consumer welfare under addictions. All the same - these claims have been made too - and we argue that such claims are allowed to piggy-back on the former claims provided these have been validly supported. We then discuss a survey mailed to all published rational addiction researchers which provides indicative support - or at least is consistent with - the claim that the "culture" of economics knows the relevant criteria for evaluating claims of pure theory and statistical fit better than it knows the relevant criteria for evaluating claims of causal or welfare "insight". . . .

If this explanation holds up after further challenges and research and refinement, it would also provide a way of changing things - simply by demanding that researchers state claims more explicitly and with greater precision, and that we start discussing different claims separately and using the evidence relevant to each specific one. Unsupported claims about the real world should not be something you're allowed to tag on at the end of a work as a treat for competently having done something quite unrelated.

Or, as Kaiser Fung puts it, "story time." (For a recent example, see the background behind the claim that "a raise won't make you work harder.")

This (Ole's idea) is just great: moving from criticism to a model and pointing the way forward to possible improvement.

Tyler Cowen writes:

Texas has begun to enforce [a law regarding parallel parking] only recently . . . Up until now, of course, there has been strong net mobility into the state of Texas, so was the previous lack of enforcement so bad?

I care not at all about the direction in which people park their cars and I have no opinion on this law, but I have to raise an alarm at Cowen's argument here.

Let me strip it down to its basic form:

1. Until recently, state X had policy A.

2. Up until now, there has been strong net mobility into state X

3. Therefore, the presumption is that policy A is ok.

In this particular case, I think we can safely assume that parallel parking regulations have had close to zero impact on the population flows into and out of Texas. More generally, I think logicians could poke some holes in the argument that 1 and 2 above imply 3. For one thing, you could apply this argument to any policy in any state that's had positive net migration.

Hair styling licensing in Florida, anyone?

P.S. I'm not trying to pick on Cowen here. Everybody makes mistakes. The most interesting logical errors are the ones that people make by accident, without reflection. So I thought it could be helpful to point this one out.

P.P.S. Commenters suggest Cowen was joking. In which case I applaud him for drawing attention to this common error in reasoning.

I keep encountering the word "Groupon"--I think it's some sort of commercial endeavor where people can buy coupons? I don't really care, and I've avoided googling the word out of a general animosity toward our society's current glorification of get-rich-quick schemes. (As you can tell, I'm still bitter about that whole stock market thing.)

Anyway, even without knowing what Groupon actually is, I enjoyed this blog by Kaiser Fung in which he tries to work out some of its economic consequences. He connects the statistical notion of counterfactuals to the concept of opportunity cost from economics. The comments are interesting too.

Xian points me to an article by retired college professor David Rubinstein who argues that college professors are underworked and overpaid:

After 34 years of teaching sociology at the University of Illinois at Chicago, I [Rubinstein] recently retired at age 64 at 80 percent of my pay for life. . . . But that's not all: There's a generous health insurance plan, a guaranteed 3 percent annual cost of living increase, and a few other perquisites. . . . I was also offered the opportunity to teach as an emeritus for three years, receiving $8,000 per course . . . which works out to over $200 an hour. . . .

You will perhaps not be surprised to hear that I had two immediate and opposite reactions to this:

1. Hey--somebody wants to cut professors' salaries. Stop him!

2. Hey--this guy's making big bucks and doesn't do any work--that's not fair! (I went online to find David Rubinstein's salary but it didn't appear in the database. So I did the next best thing and looked up the salaries of full professors in the UIC sociology department. The salaries ranged from 90K to 135K. That really is higher than I expected, given that (a) sociology does not have a reputation as being a high-paying field, and (b) UIC is OK but it's not generally considered a top university.)

Having these two conflicting reactions made me want to think about this further.

Under the headline, "A Raise Won't Make You Work Harder," Ray Fisman writes:

To understand why it might be a bad idea to cut wages in recessions, it's useful to know how workers respond to changes in pay--both positive and negative changes. Discussion on the topic goes back at least as far as Henry Ford's "5 dollars a day," which he paid to assembly line workers in 1914. The policy was revolutionary at the time, as the wages were more than double what his competitors were paying. This wasn't charity. Higher-paid workers were efficient workers--Ford attracted the best mechanics to his plant, and the high pay ensured that employees worked hard throughout their eight-hour shifts, knowing that if their pace slackened, they'd be out of a job. Raising salaries to boost productivity became known as "efficiency wages."

So far, so good. Fisman then moves from history and theory to recent research:

How much gift exchange really matters to American bosses and workers remained largely a matter of speculation. But in recent years, researchers have taken these theories into workplaces to measure their effect on employee behavior.

In one of the first gift-exchange experiments involving "real" workers, students were employed in a six-hour library data-entry job, entering title, author, and other information from new books into a database. The pay was advertised as $12 an hour for six hours. Half the students were actually paid this amount. The other half, having shown up expecting $12 an hour, were informed that they'd be paid $20 instead. All participants were told that this was a one-time job--otherwise, the higher-paid group might work harder in hopes of securing another overpaying library gig.

The experimenters checked in every 90 minutes to tabulate how many books had been logged. At the first check-in, the $20-per-hour employees had completed more than 50 books apiece, while the $12-an-hour employees barely managed 40 each. In the second 90-minute stretch, the no-gift group maintained their 40-book pace, while the gift group fell from more than 50 to 45. For the last half of the experiment, the "gifted" employees performed no better--40 books per 90-minute period--than the "ungifted" ones.

The punchline, according to Fisman:

The goodwill of high wages took less than three hours to evaporate completely--hardly a prescription for boosting long-term productivity.

What I'm wondering is: How seriously should we take an experiment on one-shot student library jobs (or another study, in which short-term employees were rewarded "with a surprise gift of thermoses") when drawing general conclusions such as "Raises don't make employees work harder"?

What I'm worried about here isn't causal identification--I'm assuming these are clean experiments--but the generalizability to the outside world of serious employment.

Fisman writes:

All participants were told that this was a one-time job--otherwise, the higher-paid group might work harder in hopes of securing another overpaying library gig.

This seems like a direct conflict between the goals of internal and external validity, especially given that one of the key reasons to pay someone more is to motivate them to work harder to secure continuation of the job, and to give them less incentive to spend their time looking for something new.

I'm not saying that the study Fisman cited is useless, just that I'm surprised that he's so careful to consider internal validity issues yet seems to have no problem extending the result to the whole labor force.

These are just my worries. Ray Fisman is an excellent researcher here at the business school at Columbia--actually, I know him and we've talked about statistics a couple times--and I'm sure he's thought about these issues more than I have. So I'm not trying to debunk what he's saying, just to add a different perspective.

Perhaps Fisman's b-school background explains why his studies all seem to be coming from the perspective of the employer: it's the employer who decides what to do with wages (perhaps "presenting the cut as a temporary measure and by creating at least the illusion of a lower workload") and the employees who are the experimental subjects.

Fisman's conclusion:

If we can find other ways of overcoming the simmering resentment that naturally accompanies wage cuts, workers themselves will be better for it in the long run.

The "we" at the beginning of the sentence does not seem to be the same as the "workers" at the end of the sentence. I wonder if there is a problem with designing policies in this unidirectional fashion.

A couple of things in this interview by Andrew Goldman of Larry Summers irritated me.

I'll give the quotes and then explain my annoyance.

Stan will make a total lifetime profit of $0, so we can't be sued!

About 15 years ago I ran across this book and read it, just for fun. Rhoads is a (nonquantitative) political scientist and he's writing about basic economic concepts such as opportunity cost, marginalism, and economic incentives. As he puts it, "welfare economics is concerned with anything any individual values enough to be willing to give something up for it."

The first two-thirds of the book is all about the "economist's view" (personally, I'd prefer to see it called the "quantitative view") of the world and how it applies to policy issues. The quick message, which I think is more generally accepted now than in the 1970s when Rhoads started working on this book, is that free-market processes can do better than governmental rules in allocating resources. Certain ideas that are obvious to quantitative people--for example, we want to reduce pollution and reduce the incentives to pollute, but it does not make sense to try to get the level of a pollutant all the way down to zero if the cost is prohibitively high--are not always so obvious to others. The final third of Rhoads's book discusses difficulties economists have had when trying to carry their dollar-based reasoning over to the public sector. He considers the logical tangles with the consumer-is-always-right philosophy and also discusses how economists sometimes lose credibility on topics where they are experts by pushing oversimplified ideas in non-market-based settings.

I like the book a lot. Very few readers will agree with Rhoads on all points but that isn't really the point. He explains the ideas and the historical background well, and the topics cover a wide range, from why it makes sense to tax employer-provided health insurance to various ways in which arguments about externalities have been used to motivate various silly (in his opinion, and mine) government subsidies. I also enjoyed the bits of political science that Rhoads tosses in throughout (for example, his serious discussion in chapter 11 of direct referenda, choosing representatives by lot, and various other naive proposals for political reform).

During the 25 years since the publication of Rhoads's book, much has changed in the relation between economics and public policy. Most notably, economists have stepped out of the shadows. No longer mere technicians, they are now active figures in the public debate. Paul Volcker, Alan Greenspan, and to a lesser extent Lawrence Summers have become celebrities in a way that has been rare among government economic officials. (Yes, Galbraith and Friedman were famous in an earlier era but as writers on economics. They were not actually pulling the levers of power at the time that they were economic celebrities.) And microeconomics, characterized by Rhoads as the ugly duckling of the field, has come into its own with Freakonomics and the rest.

Up until the financial crash of 2008--and even now, still--economists have been riding high. And they'd like to ride higher. For example, a few years ago economist Matthew Kahn asked why there aren't more economists in higher office--and I suspect many other prominent economists have thought the same thing. I looked up the numbers of economists in the employed population, and it turned out that they were in fact overrepresented in Congress. This is not to debate the merits of Kahn's argument--perhaps Congress would indeed be better if it included more economists--but rather to note that economists have moved from being a group with backroom influence to wanting more overt power.

So, with this as background, Rhoads's book is needed now more than ever. It's important for readers of all political persuasions to understand the power and generality of the economist's view. Rhoads's son Chris recently informed me that his father is at work on a second edition, so I pulled my well-worn copy of the first edition off the shelf. I hope the comments below will be useful during the preparation of the revision.

What follows is not intended as any sort of a review; it is merely a transcription and elaboration of the post-it notes that I put in, fifteen years ago, noting issues that I had. (In case you're wondering: yes, the notes are still sticky.)

- On page 102, Rhoads explains why economists think that price controls and minimum wage laws are bad for low-income Americans: "It is striking that there is almost no support for any of these price control measures even among the most equity-conscious economists. . . . The real issue is, in large measure, ignorance." This could be, but I'd also guess (although I haven't had a chance to check the numbers) that price controls and the minimum wage are more popular among low-income than high-income voters. This does not exactly contradict Rhoads's claim--after all, poorer people might well be less well informed about economic principles--but it makes me wonder. The political scientist in me suspects that a policy that is supported by poorer people and opposed by richer people might well be a net benefit to people on the lower end of the economic distribution. Rhoads points out that there are more economically efficient forms of transfer--for example, direct cash payments to the poor--but that's not so relevant if such policies aren't about to be implemented because of political resistance.

Later on, Rhoads approvingly quotes an economist who writes, "Rent controls destroy incentives to maintain or rehabilitate property, and are thus an assured way to preserve slums." This may have sounded reasonable when it was written in 1970 but seems naive from a modern-day perspective. Sure, you want a good physical infrastructure, you don't want the pipes to break, etc., but what really makes a neighborhood a slum is crime. Rent control can give people a stake in their location (as with mortgage tax breaks, through the economic inefficiency of creating an incentive to not move). There might be better policies to encourage stability--or maybe increased turnover in dwellings is actually preferable--but the path from "incentives to maintain or rehabilitate property" to "slums" is far from clear.

- On page 139, Rhoads writes: "Most of the costs of business safety regulation fall on consumers." Again, this might be correct, but my impression is that the strongest opposition to these regulations comes from business operators, not from consumers. Much of this opposition perhaps arises from costs that are not easily measured in dollars: for example, filling out endless forms, worrying about rules and deadlines. This sort of paperwork load is a constant cost that is borne by managers, not consumers. Anyway, my point is the same as above: as a political scientist, I'm skeptical of the argument that consumers bear most of the costs, given that business operators are (I think) the ones who really oppose these regulations. I'm not arguing that any particular regulation is a good idea, just saying that it seems naive to me to take economists' somewhat ideologically loaded claims at face value here.

- On page 217, Rhoads quotes an economics journalist who writes, "Through its tax laws, government can help create a climate for risk-taking. It ought to prey on the greed in human nature and the industriousness in the American character. Otherwise, stand aside." I have a few thoughts on these lines which perhaps sound a bit different now than in 1980 when they first appeared. Most obviously, a natural consequence of greed + industriousness is . . . theft. There's an even larger problem with this attitude, though, even setting aside moral hazard (those asymmetrical bets in which the banker gets rich if he wins but the taxpayer covers any loss). Even in a no-free-lunch environment in which risks are truly risky, why is "a climate for risk-taking" supposed to be a good thing? This seems a leap beyond the principles of economic efficiency that came in the previous chapters, and I have some further thoughts about this below.

- On page 20, Rhoads criticizes extreme safety laws and writes, "There would be nothing admirable about a society that watched the quality of its life steadily decline in hot pursuit of smaller and smaller increments of life extension." He was ahead of his time in considering this issue. Nowadays with health care costs crowding out everything else, we're all aware of this tradeoff as expressed, for example, in these graphs showing the U.S. spending twice as much on health as other countries with no benefit in life expectancy. It turned out, though, that the culprit was not safety laws but rather the tangled mixture of public and private care that we have in this country. This example suggests that the economist's view of the world can be a valuable perspective without always offering a clear direction for improvement.

Another example from Rhoads's book is nuclear power plants. Some economists argue on free-market grounds that the civilian nuclear industry should be left to fend for itself without further government support, while others argue on efficiency grounds that nuclear power is safe and clean and should be subsidized (see p. 230). Ultimately I agree with Rhoads that this comes down to costs and benefits (and I definitely think like an economist in that way) but in the meantime there is a clash of the two fundamental principles of free markets on one side and efficiency on the other. (The economists who support nuclear power on efficiency grounds cannot simply rely on the free market because of existing market-distorting factors such as safety regulations, fossil fuel subsidies, and various complexities in the existing energy supply system.)

- Finally, when economists talk about fundamental principles, they often bring in their value judgments for free. For example, on page 168 Rhoads quotes an economics writer who doubts that "we need the government to subsidize high-brow entertainment--theater, ballet, opera and television drama . . . Let people decide for themselves whether they want to be entertained by the Pittsburgh Steelers or the local symphony." Well, sure, we definitely don't need subsidies for any of these things. The question is not of need but rather of discretionary spending, given that money is indeed being disbursed as part of the political process. But what I really wonder is: what does this guy (not Rhoads, but the writer he quotes) have against the local symphony? The Pittsburgh Steelers are already subsidized! (Everybody knows this. I just did a quick search on "pittsburgh steelers subsidy" and came across this blog by Skip Sauer with this line: "Three Rivers Stadium in Pittsburgh still was carrying $45 million in debt at the time of its demolition in 2001.")

I hope that in his revision, Rhoads will elaborate on the dominant perspectives of different social science fields. Crudely speaking, political scientists speak to princes, economists speak to business owners, and sociologists speak to community organizers. If we're not careful, we political scientists can drift into a "What should the government do?" attitude which presupposes that the government's goals are reasonable. Similarly, economists have their own cultural biases, such as preferring football to the symphony and, more importantly, viewing risk taking as a positive value in and of itself.

In summary, I think The Economist's View of the World is a great book and I look forward to the forthcoming second edition. I think it's extremely important to see the economist's perspective with its strengths and limitations in a single place.

I followed a link from Tyler Cowen to this bit by Daniel Kahneman:

Education is an important determinant of income -- one of the most important -- but it is less important than most people think. If everyone had the same education, the inequality of income would be reduced by less than 10%. When you focus on education you neglect the myriad other factors that determine income. The differences of income among people who have the same education are huge.

I think I know what he's saying--if you regress income on education and other factors, and then you take education out of the model, R-squared decreases by 10%. Or something like that. Not necessarily R-squared, maybe you fit the big model, then get predictions for everyone putting in the mean value for education and look at the sd of incomes or the Gini index or whatever. Or something else along those lines.
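The calculation I'm imagining can be sketched with simulated data. This is my own toy setup, not Kahneman's actual analysis: fit income on education plus "everything else," then assign everyone the mean education level and see how much the spread of predicted incomes shrinks. All the numbers (years of schooling, coefficients) are made up for illustration.

```python
# A sketch of the "if everyone had the same education" counterfactual,
# using simulated data with invented coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
education = rng.normal(14, 3, n)      # years of schooling (illustrative)
other = rng.normal(0, 1, n)           # everything else that drives income
income = 20 + 2.0 * education + 10.0 * other + rng.normal(0, 5, n)

# Fit the "big model" by least squares.
X = np.column_stack([np.ones(n), education, other])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

# Counterfactual: everyone gets the mean education; other factors unchanged.
X_cf = X.copy()
X_cf[:, 1] = education.mean()

sd_actual = np.std(X @ beta)
sd_cf = np.std(X_cf @ beta)
print(f"reduction in sd of predicted income: {1 - sd_cf / sd_actual:.1%}")
```

The point of the sketch is just that the answer depends entirely on how much of the variance the other factors carry, which is an artifact of the model, not a statement about a world in which education had actually been equalized.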

My problem is with the counterfactual: "If everyone had the same education . . ." I have a couple problems with this one. First, if everyone had the same education, we'd have a much different world and I don't see why the regressions on which he's relying would still be valid. Second, is it even possible for everyone to have the same education? I majored in physics at MIT. I don't think it's possible for everyone to do this. Setting aside budgetary constraints, I don't think that most college-age kids could handle the MIT physics curriculum (nor do I think I could handle, for example, the courses at a top-ranked music or art college). I suppose you could imagine everyone having the same number of years of education, but that seems like a different thing entirely.

As noted, I think I see what Kahneman is getting at--income is determined by lots of other factors than education--but I'm a bit disappointed that he could be so casual with the causality. And without the causal punch, his statement doesn't seem so impressive to me. Everybody knows that education doesn't determine income, right? Bill Gates never completed college, and everybody knows the story of humanities graduates who can't find a job.

I've heard from various sources that when you give a talk in an econ dept, they eat you alive: typically the audience showers you with questions and you are lucky to get past the second slide in your presentation. So far, though, I've given seminar talks in three economics departments--George Mason University a few years ago, Sciences Po last year, and Hunter College yesterday--and all three times the audiences have been completely normal. They did not interrupt unduly and they asked a bunch of good questions at the end. n=3, sure. But still.

A friend asks the above question and writes:

This article left me thinking - how could the IRS not notice that this guy didn't file taxes for several years? Don't they run checks and notice if you miss a year? If I write a check out of order, there's an asterisk next to the check number in my next bank statement showing that there was a gap in the sequence.

If you ran the IRS, wouldn't you do this: SSNs are issued sequentially. Once a SSN reaches 18, expect it to file a return. If it doesn't, mail out a postage paid letter asking why not with check boxes such as Student, Unemployed, etc. Follow up at reasonable intervals. Eventually every SSN should be filing a return, or have an international address. Yes this is intrusive, but my goal is only to maximize tax revenue. Surely people who do this for a living could come up with something more elegant.

My response:

I dunno, maybe some confidentiality rules? The other thing is that I'm guessing that the IRS gets lots of pushback when they hassle rich and influential people. I'm sure it's much less effort for them to go after the little guy, even if that's less cost-effective. And behind this is a lack of societal consensus that the IRS are good guys. They're enforcing a law that something like a third of the people oppose! But I agree: given that we need taxes, I think we should go after the cheats.

Perhaps some informed readers out there can supply more context.

Details here.

Catherine Rampell highlights this stunning Gallup Poll result:

6 percent of Americans in households earning over $250,000 a year think their taxes are "too low." Of that same group, 26 percent said their taxes were "about right," and a whopping 67 percent said their taxes were "too high."

OK, fine. Most people don't like taxes. No surprise there. But get this next part:

And yet when this same group of high earners was asked whether "upper-income people" paid their fair share in taxes, 30 percent said "upper-income people" paid too little, 30 percent said it was a "fair share," and 38 percent said it was too much.

30 percent of these upper-income people say that upper-income people pay too little, but only 6 percent say that they personally pay too little. And 38 percent say that upper-income people pay too much, but 67 percent say they personally pay too much.

The following is an essay into a topic I know next to nothing about.

As part of our endless discussion of Dilbert and Charlie Sheen, commenter Fraac linked to a blog by philosopher Edouard Machery, who tells a fascinating story:

How do we think about the intentional nature of actions? And how do people with an impaired mindreading capacity think about it?

Consider the following probes:

The Free-Cup Case

Joe was feeling quite dehydrated, so he stopped by the local smoothie shop to buy the largest sized drink available. Before ordering, the cashier told him that if he bought a Mega-Sized Smoothie he would get it in a special commemorative cup. Joe replied, 'I don't care about a commemorative cup, I just want the biggest smoothie you have.' Sure enough, Joe received the Mega-Sized Smoothie in a commemorative cup. Did Joe intentionally obtain the commemorative cup?

The Extra-Dollar Case

Joe was feeling quite dehydrated, so he stopped by the local smoothie shop to buy the largest sized drink available. Before ordering, the cashier told him that the Mega-Sized Smoothies were now one dollar more than they used to be. Joe replied, 'I don't care if I have to pay one dollar more, I just want the biggest smoothie you have.' Sure enough, Joe received the Mega-Sized Smoothie and paid one dollar more for it. Did Joe intentionally pay one dollar more?

You surely think that paying an extra dollar was intentional, while getting the commemorative cup was not. [Indeed, I do--AG.] So do most people (Machery, 2008).

But Tiziana Zalla and I [Machery] have found that if you had Asperger Syndrome, a mild form of autism, your judgments would be very different: You would judge that paying an extra-dollar was not intentional, just like getting the commemorative cup.

I'm not particularly interested in the Asperger's angle (except for the linguistic oddity that most people call it Asperger's but in the medical world it's called Asperger; compare, for example, the headline of the linked blog to its text), but I am fascinated by the above experiment. Even after reading the description, it seems to me perfectly natural to think of the free cup as unintentional and the extra dollar as intentional. But I also agree with the implicit point that, in a deeper sense, the choice to pay the extra dollar isn't really more intentional than the choice to take the cup. It just feels that way.

To engage in a bit of introspective reasoning (as is traditional in the "heuristics and biases" field), I'd say the free cup just happened whereas in the second scenario Joe had to decide to pay the dollar.

But that's not really it. The passive/active division correctly demarcates the free cup and extra dollar examples, but Machery presents other examples where both scenarios are passive, or where both scenarios are active, and you can get perceived intentionality or lack of intentionality in either case. (Just as we learned from classical decision theory and the First Law of Robotics, to not decide is itself a decision.)

Machery's explanation (which I don't buy)

Leslie McCall spoke in the sociology department here the other day to discuss changes in attitudes about income inequality as well as changes in attitudes about attitudes about income inequality. (That is, she talked about what survey respondents say, and she talked about what scholars have said about what survey respondents say.)

On the plus side, the talk was interesting. On the downside, I had to leave right at the start of the discussion so I didn't have a chance to ask my questions. So I'm placing them below.

I can't find a copy of McCall's slides so I'll link to this recent op-ed she wrote on the topic of "Rising Wealth Inequality: Should We Care?" Her title was "Americans Aren't Naive," and she wrote:

Understanding what Americans think about rising income inequality has been hampered by three problems.

First, polls rarely ask specifically about income inequality. They ask instead about government redistributive polices, such as taxes and welfare, which are not always popular. From this information, we erroneously assume that Americans don't care about inequality. . . . Second, surveys on inequality that do exist are not well known. . . . Third . . . politicians and the media do not consistently engage Americans on the issue. . . .

It is often said that Americans care about opportunity and not inequality, but this is very misleading. Inequality can itself distort incentives and restrict opportunities. This is the lesson that episodes like the financial crisis and Great Recession convey to most Americans.

What follows is not any attempt at an exposition, appreciation, or critique of McCall's work but rather just some thoughts that arose, based on some notes I scrawled during her lecture:

1. McCall is looking at perceptions of perceptions. This reminds me of our discussions in Red State Blue State about polarization and the perception of polarization. The idea is that, even if American voters are not increasingly polarized in their attitudes, there is a perception of polarization, and this perception can itself have consequences (for example, in the support offered to politicians on either side who refuse to compromise).

2. McCall talked about meritocracy and shared a quote from Daniel Bell (whom she described as "conservative," which surprised me, but I guess it would be accurate to call him the most liberal of the neoconservatives) about how meritocracy could be good or bad, with bad meritocracy associated with meritocrats who abuse their positions of power and degrade those below them on the social ladder.

At this point I wanted to jump up and shout James "the Effect" Flynn's point that meritocracy is a self-contradiction. As Flynn put it:

The case against meritocracy can be put psychologically: (a) The abolition of materialist-elitist values is a prerequisite for the abolition of inequality and privilege; (b) the persistence of materialist-elitist values is a prerequisite for class stratification based on wealth and status; (c) therefore, a class-stratified meritocracy is impossible.

Flynn also points out that the promotion and celebration of the concept of "meritocracy" is also, by the way, a promotion and celebration of wealth and status--these are the goodies that the people with more merit get:

People must care about that hierarchy for it to be socially significant or even for it to exist. . . . The case against meritocracy can also be put sociologically: (a) Allocating rewards irrespective of merit is a prerequisite for meritocracy, otherwise environments cannot be equalized; (b) allocating rewards according to merit is a prerequisite for meritocracy, otherwise people cannot be stratified by wealth and status; (c) therefore, a class-stratified meritocracy is impossible.

In short, when people talk about meritocracy they tend to focus on the "merit" part (Does Kobe Bryant have as much merit as 10,000 schoolteachers? Do doctors have more merit than nurses? Etc.), but the real problem with meritocracy is that it's an "ocracy."

This point is not in any way a contradiction or refutation of McCall. I just think that, to the extent that debates over "just deserts" are a key part of her story, it would be useful to connect to Flynn's reflections on the impossibility of a meritocratic future.

3. I have a few thoughts on the competing concepts of opportunity vs. redistribution, which were central to McCall's framing.

a. Loss aversion. Opportunity sounds good because it's about gains. In contrast, I suspect that, when we think about redistribution, losses are more salient. (Redistribution is typically framed as taking from group A and giving to group B. There is a vague image of a bag full of money, and of course you have to take it from A before giving it to B.) So to the extent there is loss aversion (and I think there is), redistribution is always gonna be a tough sell.

b. The path from goal to policy. If you're going to cut taxes, what services do you plan to cut? If you plan to increase services, who's going to pay for it? Again, economic opportunity sounds great because you're not taking it from anybody. This is not just an issue of question wording in a survey; I think it's fundamental to how people think about inequality and redistribution.

I suspect the cognitive (point "a" above) and political (point "b") framing are central to people's struggles in thinking about economic opportunity. The clearest example is affirmative action, where opportunity for one group directly subtracts from opportunity for others.

4. As I remarked during McCall's talk, I was stunned that more than half the people did not think that family or ethnicity helped people move up in the world. We discussed the case of George W. Bush, who certainly benefited from family connections but can't really be said to have moved up in the world--for him, being elected president was just a way to stand still, intergenerationally speaking. As well as being potentially an interesting example for McCall's book-in-progress, the story of G. W. Bush illustrates some of the inherent contradictions in thinking about mobility in a relative sense. Not everyone can move up, at least not in a relative sense.

5. McCall talked about survey results on Americans' views of rich people and, I think, of corporate executives. This reminds me of survey data from 2007 on Americans' views of corporations:

Nearly two-thirds of respondents say corporate profits are too high, but, according to a Pew research report, "more than seven in ten agree that 'the strength of this country today is mostly based on the success of American business' - an opinion that has changed very little over the past 20 years." People like business in general (except for those pesky corporate profits) but they love individual businesses, with 95% having a favorable view of Johnson and Johnson (among those willing to give a rating), 94% liking Google, 91% liking Microsoft, . . . I was surprised to find that 70% of the people were willing to rate Citibank, and of those people, 78% had a positive view. I don't have a view of Citibank one way or another, but it would seem to me to be the kind of company that people wouldn't like, even in 2007. Were banks ever popular? I guess so.

The Pew report broke things down by party identification (Democrat or Republican) and by "those who describe their household as professional or business class; those who call themselves working class; and those who say their family or household is struggling."

Republicans tend to like corporations, with little difference between the views of professional-class and working-class Republicans. For Democrats, though, there's a big gap, with professionals having a generally more negative view, compared to the working class. Follow the link for the numbers and further discussion of some fascinating patterns that I can't easily explain.

6. In current debates over the federal budget, liberals favor an economic stimulus (i.e., deficit spending) right now, while conservatives argue that, not only should we decrease the deficit, but that our entire fiscal structure is unsustainable, that we can't afford the generous pensions and health care that's been promised to everyone. The crisis in the euro is often taken by fiscal conservatives as a signal that the modern welfare state is a pyramid scheme, and something has to get cut.

When the discussion shifts to the standard of living of the middle class, though, we get a complete reversal. McCall's op-ed was part of an online symposium on wealth inequality. One thing that struck me about the discussions there was the reversal of the usual liberal/conservative perspectives on fiscal issues.

Liberals who are fine with deficits at the national level argue that, in the words of Michael Norton, "the expansion of consumer credit in the United States has allowed middle class and poor Americans to live beyond their means, masking their lack of wealth by increasing their debt." From the other direction, conservatives argue that Americans are doing just fine, with Scott Winship reporting that "four in five Americans have exceeded the income their parents had at the same age."

From the left, we hear that America is rich but Americans are broke. From the right, the story is the opposite: America (along with Europe and Japan) is broke but individual Americans are doing fine.

I see the political logic to these positions. If you start from the (American-style) liberal perspective favoring government intervention in the economy, you'll want to argue that (a) people are broke and need the government's help, and (b) we as a society can afford it. If you start from the conservative perspective favoring minimal government intervention, you'll want to argue that (a) people are doing just fine as they are, and (b) anyway, we can't afford to help them.

I won't try to adjudicate these claims: as I've written a few dozen times in this space already, I have no expertise in macroeconomics (although I did get an A in the one and only econ class I ever took, which was in 11th grade). I bring them up in order to demonstrate the complicated patterns between economic ideology, political ideology, and views about inequality.

Happy tax day!


Your taxes pay for the research funding that supports the work we do here, some of which appears on this blog and almost all of which is public, free, and open-source. So, to all of the taxpayers out there in the audience: thank you.

A few years ago Larry Bartels presented this graph, a version of which later appeared in his book Unequal Democracy:


Larry looked at the data in a number of ways, and the evidence seemed convincing that, at least in the short term, the Democrats were better than Republicans for the economy. This is consistent with Democrats' general policies of lowering unemployment, as compared to Republicans lowering inflation, and, by comparing first-term to second-term presidents, he found that the result couldn't simply be explained as a rebound or alternation pattern.

The question then arose, why have the Republicans won so many elections? Why aren't the Democrats consistently dominating? Non-economic issues are part of the story, of course, but lots of evidence shows the economy to be a key concern for voters, so it's still hard to see how, with a pattern such as shown above, the Republicans could keep winning.

Larry had some explanations, largely having to do with timing: under Democratic presidents the economy tended to improve at the beginning of the four-year term, while gains under Republicans tended to occur in years 3 and 4--just in time for the next campaign!

See here for further discussion (from five years ago) of Larry's ideas from the perspective of the history of the past 60 years.

Enter Campbell

Jim Campbell recently wrote an article, to appear this week in The Forum (the link should become active once the issue is officially published), claiming that Bartels is all wrong--or, more precisely, that Bartels's finding of systematic differences in performance between Democratic and Republican presidents is not robust and goes away when you control for the economic performance leading into a president's term.

Here's Campbell:

Previous estimates did not properly take into account the lagged effects of the economy. Once lagged economic effects are taken into account, party differences in economic performance are shown to be the effects of economic conditions inherited from the previous president and not the consequence of real policy differences. Specifically, the economy was in recession when Republican presidents became responsible for the economy in each of the four post-1948 transitions from Democratic to Republican presidents. This was not the case for the transitions from Republicans to Democrats. When economic conditions leading into a year are taken into account, there are no presidential party differences with respect to growth, unemployment, or income inequality.

For example, using the quarterly change in GDP measure, the economy was in free fall in Fall 2008 but in recovery during the third and fourth quarters of 2009, so this counts as Obama coming in with a strong economy. (Campbell emphasizes that he is following the lead of Bartels and counting a president's effect on the economy to not begin until year 2.)

It's tricky. Bartels's claims are not robust to changes in specifications, but Campbell's conclusions aren't completely stable either. Campbell finds one thing if he controls for previous year's GNP growth but something else if he controls only for GNP growth in the 3rd and 4th quarter of the previous year. This is not to say Campbell is wrong but just to say that any atheoretical attempt to throw in lags can result in difficulty in interpretation.
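The inherited-conditions mechanism Campbell describes can be illustrated with a toy simulation. This is my own invented setup, not Campbell's or Bartels's data: suppose the true party effect on growth is zero, but growth mean-reverts and one party tends to take office only after bad years. Then a naive regression of growth on party shows a spurious "party difference" that shrinks toward zero once lagged growth is controlled for.

```python
# Toy simulation: no true party effect, but party 1 tends to inherit recessions.
import numpy as np

rng = np.random.default_rng(1)
T = 20_000
growth = np.zeros(T)
party = np.zeros(T, dtype=int)
for t in range(1, T):
    if party[t - 1] == 0:
        # Party 1 tends to win office only after a bad year.
        party[t] = 1 if (growth[t - 1] < 0 and rng.random() < 0.5) else 0
    else:
        party[t] = 0 if rng.random() < 0.1 else 1
    # Growth mean-reverts on its own; the party in power has no effect.
    growth[t] = 0.5 * growth[t - 1] + rng.normal(1.0, 2.0)

y, p, lag = growth[1:], party[1:], growth[:-1]

def party_coef(controls):
    X = np.column_stack([np.ones(len(y)), p] + controls)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = party_coef([])      # spurious negative "effect" of party 1
lagged = party_coef([lag])  # near zero once lagged growth is controlled
print(naive, lagged)
```

This is only one side of the identification problem, of course: a real Hibbs/Bartels-style policy difference and a Campbell-style inheritance story can both be consistent with the same regressions, which is why the choice of lag specification matters so much.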

I'm curious what Doug Hibbs thinks about all this; I don't know why, but to me Hibbs exudes an air of authority on this topic, and I'd be inclined to take his thoughts on these matters seriously.

What struck me the most about Campbell's paper was ultimately how consistent its findings are with Bartels's claims. This perhaps shouldn't be a surprise, given that they're working with the same data, but it did surprise me because their political conclusions are so different.

Here's the quick summary, which (I think) both Bartels and Campbell would agree with:

- On average, the economy did a lot better under Democratic than Republican presidents in the first two years of the term.

- On average, the economy did slightly better under Republican than Democratic presidents in years 3 and 4.

These two facts are consistent with the Hibbs/Bartels story (Democrats tend to start off by expanding the economy and pay the price later, while Republicans are more likely to start off with some fiscal or monetary discipline) and also consistent with Campbell's story (Democratic presidents tend to come into office when the economy is doing OK, and Republicans are typically only elected when there are problems).

But the two stories have different implications regarding the finding of Hibbs, Rosenstone, and others that economic performance in the last years of a presidential term predicts election outcomes. Under the Bartels story, voters are myopically chasing short-term trends, whereas in Campbell's version, voters are correctly picking up on the second derivative (that is, the trend in the change of the GNP from beginning to end of the term).

Consider everyone's favorite example: Reagan's first term, when the economy collapsed and then boomed. The voters (including Larry Bartels!) returned Reagan by a landslide in 1984: were they suckers for following a short-term trend or were they savvy judges of the second derivative?
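The two voter stories amount to two different summaries of the same quarterly series. With made-up growth numbers loosely mimicking a collapse-then-boom term:

```python
# Made-up quarterly growth rates for a four-year term, collapse then boom:
growth = [-1.5, -2.0, -0.5, 0.5,   # year 1: recession
           1.0,  2.0,  3.0, 3.5,   # year 2: recovery
           3.0,  3.0,  2.5, 2.5,   # year 3: boom levels off
           2.5,  2.0,  2.0, 2.0]   # year 4: still solid

# Myopic voter: average growth in the election year only.
recent_trend = sum(growth[-4:]) / 4
# Second-derivative voter: change in growth from the first year to the last.
second_derivative = (sum(growth[-4:]) - sum(growth[:4])) / 4

print(recent_trend, second_derivative)
```

For this path both summaries reward the incumbent, which is why the 1984 example alone can't distinguish the stories; a term that started strong and ended mediocre would pull them apart.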

I don't have any handy summary here--I don't see a way to declare a winner in the debate--but I wanted to summarize what seem to me to be the key points of agreement and disagreement in these very different perspectives on the same data.

One way to get leverage on this would be to study elections for governor and state economies. Lots of complications there, but maybe enough data to distinguish between the reacting-to-recent-trends and reacting-to-the-second-derivative stories.

P.S. See below for comments by Campbell.

Some thoughts on the implausibility of Paul Ryan's 2.8% unemployment forecast. Some general issues arise.

P.S. Yes, Democrats also have been known to promote optimistic forecasts!

After noting the increasing political conservatism of people in the poorer states, Richard Florida writes:

The current economic crisis only appears to have deepened conservatism's hold on America's states. This trend stands in sharp contrast to the Great Depression, when America embraced FDR and the New Deal.

Liberalism, which is stronger in richer, better-educated, more-diverse, and, especially, more prosperous places, is shrinking across the board and has fallen behind conservatism even in its biggest strongholds. This obviously poses big challenges for liberals, the Obama administration, and the Democratic Party moving forward.

But the much bigger, long-term danger is economic rather than political. This ideological state of affairs advantages the policy preferences of poorer, less innovative states over wealthier, more innovative, and productive ones. American politics is increasingly disconnected from its economic engine. And this deepening political divide has become perhaps the biggest bottleneck on the road to long-run prosperity.

What are my thoughts on this?

First, I think Obama would be a lot more popular had he been elected in 1932, rather than 1930.

Second, transfers from the richer, more economically successful states to the poorer, less developed states are not new. See, for example, this map from 1924 titled "Good Roads Everywhere" that shows a proposed system of highways spanning the country, "to be built and forever maintained by the United States Government."


The map, made by the National Highways Association, also includes the following explanation for the proposed funding system: "Such a system of National Highways will be paid for out of general taxation. The 9 rich densely populated northeastern States will pay over 50 per cent of the cost. They can afford to, as they will gain the most. Over 40 per cent will be paid for by the great wealthy cities of the Nation. . . . The farming regions of the West, Mississippi Valley, Southwest and South will pay less than 10 per cent of the cost and get 90 per cent of the mileage." [emphasis added] Beyond its quaint slogans ("A paved United States in our day") and ideas that time has passed by ("Highway airports"), the map gives a sense of the potential for federal taxing and spending to transfer money between states and regions.

Reviewing a research article by Michael Spence and Sandile Hlatshwayo about globalization (a paper with the sobering message that "higher-paying jobs [are] likely to follow low-paying jobs in leaving US"), Tyler Cowen writes:

It is also a useful corrective to the political conspiracy theories of changes in the income distribution. . .

Being not-so-blissfully ignorant of macroeconomics, I can focus on the political question, namely these conspiracy theories.

I'm not quite sure what Cowen is referring to here--he neglects to provide a link to the conspiracy theories--but I'm guessing he's referring to the famous graph by Piketty and Saez showing how very high-end incomes (top 1% or 0.1%) have, since the 1970s, risen much more dramatically in the U.S. than in other countries, along with claims by Paul Krugman and others that much of this difference can be explained by political changes in the U.S. In particular, top tax rates in the U.S. have declined since the 1970s and the power of labor unions has decreased. The argument that Krugman and others make on the tax rates is both direct (the government takes away money from people with high incomes) and indirect (with higher tax rates, there is less incentive to seek or to pay high levels of compensation). And there's an even more indirect argument that as the rich get richer, they can use their money in various ways to get more political influence.

Anyway, I'm not sure what the conspiracy is. I mean, whatever Grover Norquist might be doing in a back room somewhere, the move to lower taxes was pretty open. According to Dictionary.com, a conspiracy is "an evil, unlawful, treacherous, or surreptitious plan formulated in secret by two or more persons; plot."

Hmm . . . I suppose Krugman etc. might in fact argue that there has been some conspiracy going on--for example of employers conspiring to use various illegal means to thwart union drives--but I'd also guess that to him and others on the left or center-left, most of the political drivers of inequality changes have been open, not conspiratorial.

I might be missing something here, though; I'd be interested in hearing more. At this point I'm not sure if Cowen's saying that these conspiracies don't exist, or whether they exist (and are possibly accompanied by similar conspiracies on the other side) but have been ineffective. Also I might be completely wrong in assigning Cowen's allusion to Krugman etc.

This discussion is relevant to this here blog because the labeling of a hypothesis as a "conspiracy" seems relevant to how it is understood and evaluated.

Ed Glaeser writes:

The late Senator Daniel Patrick Moynihan of New York is often credited with saying that the way to create a great city is to "create a great university and wait 200 years," and the body of evidence on the role that universities play in generating urban growth continues to grow.

I've always thought this too, that it's too bad that, given the total cost, a lot more cities would've benefited, over the years, by maintaining great universities rather than building expensive freeways, RenCens, and so forth.

But Joseph Delaney argues the opposite, considering the case of New Haven, home of what is arguably the second-best university in the country (I assume Glaeser would agree with me on this one):

Coauthorship norms


I followed this link from Chris Blattman to an article by economist Roland Fryer, who writes:

I [Fryer] find no evidence that teacher incentives increase student performance, attendance, or graduation, nor do I find any evidence that the incentives change student or teacher behavior.

What struck me were not the findings (which, as Fryer notes in his article, are plausible enough) but the use of the word "I" rather than "we." A field experiment is a big deal, and I was surprised to read that Fryer did it all by himself!

Here's the note of acknowledgments (on the first page of the article):

This project would not have been possible without the leadership and support of Joel Klein. I am also grateful to Jennifer Bell-Ellwanger, Joanna Cannon, and Dominique West for their cooperation in collecting the data necessary for this project, and to my colleagues Edward Glaeser, Richard Holden, and Lawrence Katz for helpful comments and discussions. Vilsa E. Curto, Meghan L. Howard, Won Hee Park, Jörg Spenkuch, David Toniatti, Rucha Vankudre, and Martha Woerner provided excellent research assistance.

Joel Klein was the schools chancellor so I assume he wasn't deeply involved in the study; his role was presumably to give it his OK. I'm surprised that none of the other people ended up as coauthors on the paper. But I guess it makes sense: My colleagues and I will write a paper based on survey data without involving the data collectors as coauthors, so why not do this with experimental data too? I guess I just find field experiments so intimidating that I can't imagine writing an applied paper on the topic without a lot of serious collaboration. (And, yes, I feel bad that it was only my name on the cover of Red State, Blue State, given that the book had five authors.) Perhaps the implicit rules about coauthorship are different in economics than in political science.

P.S. I was confused by one other thing in Fryer's article. On page 1, it says:

Despite these reforms to increase achievement, Figure 1 demonstrates that test scores have been largely constant over the past thirty years.

Here's Figure 1:


Once you get around the confusingly-labeled lines and the mass of white space on the top and bottom of each graph, you see that math scores have improved a lot! Since 1978, fourth-grade math scores have gone up so much that they're halfway to where eighth grade scores were in 1978. Eighth grade scores also have increased substantially, and twelfth-grade scores have gone up too (although not by as much). Nothing much has happened with reading scores, though. Perhaps Fryer just forgot to add the word "reading" in the sentence above. Or maybe something else is going on in Figure 1 that I missed. I only wish that he'd presented the rest of his results graphically. Even a sloppy graph is a lot easier for me to follow than a table full of numbers presented to three decimal places. I know Fryer can do better; his previous papers had excellent graphs (see here and here).

Vishnu Ganglani writes:

It appears that multiple imputation appears to be the best way to impute missing data because of the more accurate quantification of variance. However, when imputing missing data for income values in national household surveys, would you recommend it would be practical to maintain the multiple datasets associated with multiple imputations, or a single imputation method would suffice. I have worked on household survey projects (in Scotland) and in the past gone with suggesting single methods for ease of implementation, but with the availability of open source R software I am think of performing multiple imputation methodologies, but a bit apprehensive because of the complexity and also the need to maintain multiple datasets (ease of implementation).

My reply: In many applications I've just used a single random imputation to avoid the awkwardness of working with multiple datasets. But if there's any concern, I'd recommend doing parallel analyses on multiple imputed datasets and then combining inferences at the end.
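For what it's worth, the combining step is simple enough that maintaining multiple datasets needn't be a burden. Here's a minimal sketch (simulated data, simple hot-deck draws standing in for a real imputation model, and assuming missingness completely at random) of estimating a mean from m imputed datasets and pooling with Rubin's rules:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survey: log-normal "incomes," about 30% missing at random.
income = rng.lognormal(10, 0.5, 500)
observed = rng.random(500) > 0.3
y_obs = income[observed]
n_mis = (~observed).sum()

m = 20
estimates, variances = [], []
for _ in range(m):
    # One random (hot-deck) imputation: fill gaps with draws from the observed.
    completed = np.concatenate([y_obs, rng.choice(y_obs, size=n_mis)])
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / len(completed))

qbar = np.mean(estimates)             # pooled point estimate
ubar = np.mean(variances)             # average within-imputation variance
b = np.var(estimates, ddof=1)         # between-imputation variance
total_var = ubar + (1 + 1 / m) * b    # Rubin's rule: extra term for imputation noise
print(qbar, total_var ** 0.5)
```

The single-imputation shortcut amounts to dropping the between-imputation term, which is exactly the understatement of variance that motivates doing it multiply.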

Rajiv Sethi has some very interesting things to say:

As the election season draws closer, considerable attention will be paid to prices in prediction markets such as Intrade. Contracts for potential presidential nominees are already being scrutinized for early signs of candidate strength. . . .

This interpretation of prices as probabilities is common and will be repeated frequently over the coming months. But what could the "perceived likelihood according to the market" possibly mean?

Prediction market prices contain valuable information about this distribution of beliefs, but there is no basis for the common presumption that the price at last trade represents the beliefs of a hypothetical average trader in any meaningful sense [emphasis added]. In fact, to make full use of market data to make inferences about the distribution of beliefs, one needs to look beyond the price at last trade and examine the entire order book.

Sethi looks at some of the transaction data and continues:

What, then, can one say about the distribution of beliefs in the market? To begin with, there is considerable disagreement about the outcome. Second, this disagreement itself is public information: it persists despite the fact that it is commonly known to exist. . . . the fact of disagreement is not itself considered to be informative, and does not lead to further belief revision. The most likely explanation for this is that traders harbor doubts about the rationality or objectivity of other market participants. . . .

More generally, it is entirely possible that beliefs are distributed in a manner that is highly skewed around the price at last trade. That is, it could be the case that most traders (or the most confident traders) all fall on one side of the order book. In this case the arrival of seemingly minor pieces of information can cause a large swing in the market price.

Sethi's conclusion:

There is no meaningful sense in which one can interpret the price at last trade as an average or representative belief among the trading population.

This relates to a few points that have come up here on occasion:

1. We're often in the difficult position of trying to make inferences about marginal (in the economic sense) quantities from aggregate information.

2. Markets are impressive mechanisms for information aggregation but they're not magic. The information has to come from somewhere, and markets are inherently always living in the phase transition between stability and instability. (It is the stability that makes prices informative and the instability that allows the market to be liquid.)

3. If the stakes in a prediction market are too low, participants have the incentive and ability to manipulate it; if the stakes are too high, you have to worry about point-shaving.

This is not to say that prediction markets are useless, just that they are worth studying seriously in their own right, not to be treated as oracles. By actually looking at and analyzing some data, Sethi goes far beyond my sketchy thoughts in this area.

Timothy Noah reports:

At the end of 2007, Harvard announced that it would limit tuition to no more than 10 percent of family income for families earning up to $180,000. (It also eliminated all loans, following a trail blazed by Princeton, and stopped including home equity in its calculations of family wealth.) Yale saw and raised to $200,000, and other wealthy colleges weighed in with variations.

Noah argues that this is a bad thing because it encourages other colleges to give tuition breaks to families with six-figure incomes, thus sucking up money that could otherwise go to reduce tuition for lower-income students. For example:

Roger Lehecka, a former dean of students at Columbia, and Andrew Delbanco, director of American studies there, wrote in the New York Times that Harvard's initiative was "good news for students at Harvard or Yale" but "bad news" for everyone else. "The problem," they explained, "is that most colleges will feel compelled to follow Harvard and Yale's lead in price-discounting. Yet few have enough money to give more aid to relatively wealthy students without taking it away from relatively poor ones."

I don't follow the reasoning here. Noah also writes that Harvard received "35,000 applications for fewer than 1,700 slots," so I don't see why these other schools have to match Harvard at all. Why not just compete for the 33,300 kids who get rejected from Harvard (not to mention those who don't apply to the big H at all)? Sure, there's Yale too, but still, there's something about this story that's bothering me.

Ultimately, this doesn't seem like it's about income at all. I mean, suppose Harvard, Yale, etc., took the big step of zeroing out their tuitions entirely, so that even Henry Henhouse III could send little Henry IV to Harvard without paying a cent (ok, maybe something for room and board, but really that could be free too, if Harvard wanted to do it that way). Now maybe this wouldn't be a good move for the university--I'm sure the money would be more effectively spent as a salary increase for the statistics and political science faculty--but let's not worry about the details. The point is, if Harvard and Yale became free, Noah's argument would continue to hold. But is it really right to criticize a rich institution for giving things out for free? I'm planning to publish my intro statistics book for free. Does this mean I'm a bad guy because I'm depriving Cambridge University Press of the money that they could use to subsidize worthy but unprofitable books on classical studies? I don't think so.

To put it another way, it seems pretty weird to me to say that Harvard has an obligation to keep its tuition high, just to give other colleges a break. If Harvard and Yale want to cut tuition costs, or if MIT wants to stream lectures online for free, that's good, no?


But I think I'm missing something. At the end of his essay, Noah says he wants college costs to decrease ("surely the answer is to curb the inflation of this commodity's price"), which seems to contradict his earlier complaints about Harvard and Yale's tuition-cutting. I'd be interested in hearing from him (and from Lehecka and Delbanco at Columbia) what their ideal Harvard and Yale tuition plans would be. These institutions already charge very little for kids from low-income families, so if you want to cut the cost of tuition, but not to offer discounts for the upper middle class, then what exactly are they recommending? It's hard for me to imagine they want Harvard to cut tuition for rich kids, but that seems like the only option left. I'm confused.

This was just bizarre. It's an interview with Colin Camerer, a professor of finance and economics at Caltech. The topic is Camerer's marriage, but what's weird is that he doesn't say anything specific about his wife at all. All we get are witticisms of the sub-Henny-Youngman level; for example, in response to the question, "Any free riding in your household?", Camerer says:

No. Here's why: I am one of the world's leading experts on psychology, the brain and strategic game theory. But my wife is a woman. So it's a tie.

Also some schoolyard evolutionary biology ("men signaling that they will help you raise your baby after conception, and women signaling fidelity" blah blah blah) and advice for husbands in "upper-class marriages with assets." (No advice to the wives, but maybe that's a good thing.) And here are his insights on love and marriage:

Marriage is like hot slow-burning embers compared to the flashy flames of love. After the babies, the married brain has better things to do--micromanage, focus on those babies, create comfort zones. Marriage love can then burrow deeper, to the marrow.

To the marrow, eh? And what about couples who have no kids? Then maybe you're burrowing through the skin and just to the surface of the bone, I guess.

It seems like a wasted opportunity, really: this dude could've shared insights from his research and discussed its applicability (or the limitations of its applicability) to love, a topic that everybody cares about. (In contrast, another interview in this Economists in Love series, by Daniel Hamermesh, was much more to the point.)

Yeah, sure, I'm a killjoy, the interview is just supposed to be fluff, etc. Still, what kind of message are you sending when you define yourself as "one of the world's leading experts on psychology" and define your wife as "a woman"? Yes, I realize it's supposed to be self-deprecating, but to me it comes off as self-deprecating along the lines of, "Yeah, my cat's much smarter than I am. She gets me to do whatever she wants. Whenever she cries out, I give her food."

I'm not talking about political correctness here. I'm more worried about the hidden assumptions that can sap one's research, as well as the ways in which subtle and interesting ideas in psychology can become entangled with various not-so-subtle, not-so-well-thought-out ideas on sex roles etc.

I'm being completely unfair to Camerer

I have no idea how this interview was conducted, but it could well have been done over the phone in ten minutes. Basically, Camerer is a nice guy, and when these reporters called him up to ask him some questions, he said, Sure, why not. And then he said whatever came to his mind. If I were interviewed without preparation and allowed to ramble, I'd say all sorts of foolish things too. So basically I'm slamming Camerer for being nice enough to answer a phone call and then having the misfortune to see his casual thoughts spread all over the web (thanks to a link from Tyler Cowen, who really should've known better). So I take it all back.

P.S. Camerer's webpage mentions that he received his Ph.D. in 1981 at the age of 22. Wouldn't it be more usual to simply give your birth year (1958 or 1959, in this case)? Perhaps it's some principle of behavioral economics, that if people have to do the subtraction they'll value the answer a bit more.

John Cook links to a blog by Ben Deaton arguing that people often waste time trying to set up ideal working conditions, even though (a) your working conditions will never be ideal, and (b) the sorts of constraints and distractions that one tries to avoid can often stimulate new ideas.

Deaton seems like my kind of guy--for one thing, he works on nonlinear finite element analysis, which is one of my longstanding interests--and in many ways his points are reasonable and commonsensical (I have little doubt, for example, that Feynman made a good choice in staying clear of the Institute for Advanced Study!), but I have a couple of points of disagreement.

1. In my experience, working conditions can make a difference. And once you accept this, it could very well make sense to put some effort into improving your work environment. I like to say that I spent twenty years reconstructing what it felt like to be in grad school. My ideal working environment has lots of people coming in and out, lots of opportunities for discussion, planned and otherwise. It's nothing like I imagine the Institute for Advanced Study (not that I've ever been there), but it makes me happy. So I think Deaton is wrong to generalize from "don't spend time trying to keep a very clean work environment" to "don't spend time trying to get a setup that works for you."

2. Also consider effects on others. I like to think that the efforts I put into my work environment have positive spillovers--on the people I work with, the other people they work with, and so on--and also set an example for others in the department. In contrast, people who want super-clean work conditions (the sort of thing that Deaton, rightly, is suspicious of) can impose negative externalities on others. For example, one of the faculty in my department once removed my course listings from the department webpage. I never got a straight answer on why this happened, but I assumed it was because he didn't like what I taught, and it offended his sensibilities to see these courses listed. Removing the listing had the advantage, from his perspective, of cleanliness (I assume) but negatively impacted potential students and others who might have been interested in our course offerings. That is an extreme case, but I think many of us have experienced work environments in which intellectual interactions are discouraged in some way. This is clear from Deaton's stories as well.

3. Deaton concludes by asking his readers, "How ideal is ideal enough for you to do something great?" I agree with his point that there are diminishing returns to optimization and that you shouldn't let difficulties with your workplace stop you from doing good work (unless, of course, you're working somewhere where your employer gets possession of everything you do). But I am wary of his implicit statement that "you" (whoever you are) can "do something great." I think we should all try to do our best, and I'm sure that almost all of us are capable of doing good work. But is everyone out there really situated in a place where he or she can "do something great"? I doubt it. Doing something "great" is a fine aspiration, but I wonder if some of this go-for-it advice can backfire for the people out there who really aren't in a position to achieve greatness.

Amy Cohen points me to this blog by Jim Manzi, who writes:

Steve Hsu has posted a series of reflections here, here, and here on the dominance of graduates of HYPS (Harvard, Yale, Princeton, and Stanford (in that order, I believe)) in various Master-of-the-Universe-type jobs at "elite law firms, consultancies, and I-banks, hedge/venture funds, startups, and technology companies." Hsu writes:

In the real world, people believe in folk notions of brainpower or IQ. ("Quick on the uptake", "Picks things up really fast", "A sponge" ...) They count on elite educational institutions to do their g-filtering for them. . . .

Most top firms only recruit at a few schools. A kid from a non-elite UG school has very little chance of finding a job at one of these places unless they first go to grad school at, e.g., HBS, HLS, or get a PhD from a top place. (By top place I don't mean "gee US News says Ohio State's Aero E program is top 5!" -- I mean, e.g., a math PhD from Berkeley or a PhD in computer science from MIT -- the traditional top dogs in academia.) . . .

I teach at U Oregon and out of curiosity I once surveyed the students at our Honors College, which has SAT-HSGPA characteristics similar to Cornell or Berkeley. Very few of the kids knew what a venture capitalist or derivatives trader was. Very few had the kinds of life and career aspirations that are *typical* of HYPS or techer kids. . . .

I have just a few comments.

1. Getting into a top college is not the same as graduating from said college--and I assume you have to have somewhat reasonable grades (or some countervailing advantage). So, yes, the people doing the corporate hiring are using the educational institutions to do their "g-filtering," but it's not all happening at the admissions stage. Hsu quotes researcher Lauren Rivera as writing, "it was not the content of an elite education that employers valued but rather the perceived rigor of these institutions' admissions processes"--but I don't know if I believe that!

2. As Hsu points out (but maybe doesn't emphasize enough), the selection processes at these top firms don't seem to make a lot of sense even on their own terms. Here's another quote from Rivera: "[T]his halo effect of school prestige, combined with the prevalent belief that the daily work performed within professional service firms was "not rocket science," gave evaluators confidence that the possession of an elite credential was a sufficient signal of a candidate's ability to perform the analytical capacities of the job." The reasoning seems to be: the job isn't so hard, so the recruiters can hire whoever they want as long as the candidates pass a moderately stringent IQ threshold; thus they can pick the HYPS graduates whom they like. It seems like a case of the lexicographic fallacy: the idea that you first screen on IQ (via the school) and then pick on clubbability, etc., among the subset of applicants who remain.

3. I should emphasize that academic hiring is far from optimal. We never know who's going to apply for our postdoc positions. And, when it comes to faculty hiring, I think Don Rubin put it best when he said that academic hiring committees all too often act as if they're giving out an award rather than trying to hire someone to do a job. And don't get me started on tenure review committees.

4. Regarding Hsu's last point above, I've long been glad that I went to MIT rather than Harvard--maybe not overall (I was miserable in most of college) but for my future career. Either place I would've taken hard classes and learned a lot, but one advantage of MIT was that we had no sense--no sense at all--that we could make big bucks. We had no sense of making moderately big bucks as lawyers, no sense of making big bucks working on Wall Street, and no sense of making really big bucks by starting a business. I mean, sure, we knew about lawyers (but we didn't know that a lawyer with technical skills would be a killer combination), we knew about Wall Street (but we had no idea what they did, other than shout pork belly prices across a big room), and we knew about tech startups (but we had no idea that they were anything to us beyond a source of jobs for engineers). What we were all looking for was a good solid job with cool benefits (like those companies in California that had gyms at the office). I majored in physics, which my friends who were studying engineering thought was a real head-in-the-clouds kind of thing to do, not really practical at all. We really had no sense that a physics degree from MIT with good grades was a hot ticket.

And it wasn't just us, the students, who felt this way. It was the employers too. My senior year I applied to some grad schools (in physics and in statistics) and to some jobs. I got into all the grad schools and got zero job interviews. Not just zero jobs. Zero interviews. And these were not at McKinsey, Goldman Sachs, etc. (none of which I'd heard of). They were places like TRW, etc. The kind of places that were interviewing MIT physics grads (which is how I thought of applying for these jobs in the first place). And after all, what could a company like that do with a kid with perfect physics grades from MIT? Probably not enough of a conformist, eh?

This was fine for me--grad school suited me just fine. I'm just glad that big-buck$ jobs weren't on my radar screen. I think I would've been tempted by the glamour of it all. If I'd gone to college 10 or 20 years later, I might have felt that as a top MIT grad, I had the opportunity--even the obligation, in a way--to become some sort of big-money big shot. As it was, I merely thought I had the opportunity and obligation to make important contributions in science, which is a goal that I suspect works better for me (and many others like me).

P.S. Hsu says that "much of (good) social science seems like little more than documenting what is obvious to any moderately perceptive person with the relevant life experience." I think he might be making a basic error here. If you come up with a new theory, you'll want to do two things: (a) demonstrate that it predicts things you already know, and (b) use it to make new predictions. To develop, understand, and validate a theory, you have to do a lot of (a)--hence Hsu's impression--in order to be ready to do (b).

A simpler response to Hsu is that it's common for "moderately perceptive persons with the relevant life experience" to disagree with each other. In my own field of voting and elections, even someone as renowned as Michael Barone (who is more than moderately perceptive and has much more life experience than I do) can still get things embarrassingly wrong. (My reflections on "thinking like a scientist" may be relevant here.)

P.P.S. Various typos fixed.

Hipmunk < Expedia, again


This time on a NY-Cincinnati roundtrip. Hipmunk could find the individual flights but could not put them together. In contrast, Expedia got it right the first time.

See here and here for background. If anybody reading this knows David Pogue, please let him know about this. A flashy interface is fine, but ultimately what I'm looking for is a flight at the right price and the right time.

Matthew Yglesias links approvingly to the following statement by Michael Mandel:

Homeland Security accounts for roughly 90% of the increase in federal regulatory employment over the past ten years.

Roughly 90%, huh? That sounds pretty impressive. But wait a minute . . . what if total federal regulatory employment had increased a bit less? Then Homeland Security could've accounted for 105% of the increase, or 500% of the increase, or whatever. The point is that the change in total employment is the sum of a bunch of pluses and minuses. It happens that, if you don't count Homeland Security, the total hasn't changed much--I'm assuming Mandel's numbers are correct here--and that could be interesting.

The "roughly 90%" figure is misleading because, when written as a percent of the total increase, it's natural to quickly envision it as a percentage that is bounded by 100%. There is a total increase in regulatory employment that the individual agencies sum to, but some margins are positive and some are negative. If the total happens to be near zero, then the individual pieces can appear to be large fractions of the total, even possibly over 100%.

I'm not saying that Mandel made any mistakes, just that, in general, ratios can be tricky when the denominator is the sum of positive and negative parts. In this particular case, the margins were large but not quite over 100%, which somehow gives the comparison more punch than it deserves, I think.

We discussed a mathematically identical case a few years ago involving the 2008 Democratic primary election campaign.

What should we call this?

There should be a name for this sort of statistical slip-up. The Fallacy of the Misplaced Denominator, perhaps? The funny thing is that the denominator has to be small (so that the numerator seems like a lot, "90%" or whatever) but not too small (because if the ratio is over 100%, the jig is up).
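The arithmetic is easy to demonstrate. Here's a toy calculation in Python (the agency names and numbers are invented for illustration, not Mandel's actual figures): one component's share of a near-zero net change sits at a plausible-looking 90%, then jumps past 100% when another component shrinks.

```python
# Hypothetical changes in regulatory employment by agency over a decade.
changes = {
    "Homeland Security": 9000,
    "Agency A": 2000,
    "Agency B": -1500,
    "Agency C": 500,
}

total = sum(changes.values())                   # net change: 10000
share = changes["Homeland Security"] / total    # 0.9, i.e. "roughly 90%"

# Make one negative component a bit more negative, and the same numerator
# now "accounts for" more than 100% of the (smaller) net increase.
changes["Agency B"] = -3500
total2 = sum(changes.values())                  # net change: 8000
share2 = changes["Homeland Security"] / total2  # 1.125, i.e. "112.5%"
```

The numerator never moved; only the mix of pluses and minuses in the denominator did, which is exactly why such ratios can mislead.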

P.S. Mandel replies that, yes, he agrees with me in general about the problems of ratios where the denominator is a sum of positive and negative components, but that in this particular case, "all the major components of regulatory employment change are either positive or a very tiny negative." So it sounds like I was choosing a bad example to make my point!

Homework and treatment levels


Interesting discussion here by Mark Palko on the difficulty of comparing charter schools to regular schools, even if the slots in the charter schools have been assigned by lottery. Beyond the direct importance of the topic, I found the discussion interesting because I always face a challenge in my own teaching to assign the right amount of homework, given that if I assign too much, students will simply rebel and not do it.

To get back to the school-choice issue . . . Mark discussed selection effects: if a charter school is popular, it can require parents to sign a contract agreeing they will supervise their students to do lots of homework. Mark points out that there is a selection issue here, that the sort of parents who would sign that form are different from parents in general. But it seems to me there's one more twist: These charter schools are popular, right? So that would imply that there is some reservoir of parents who would like to sign the form but don't have the opportunity to do so in a regular school. So, even if the charter school is no more effective, conditional on the level of homework assigned, the spread of charter schools could increase the level of homework and thus be a good thing in general (assuming, of course, that you want your kid to do more homework). Or maybe I'm missing something here.

P.S. More here (from commenter ceolaf).

Joan Nix writes:

Your comments on this paper by Scott Carrell and James West would be most appreciated. I'm afraid the conclusions of this paper are too strong given the data set and other plausible explanations. But given where it is published, this paper is receiving and will continue to receive lots of attention. It will be used to draw deeper conclusions regarding effective teaching and experience.

Nix also links to this discussion by Jeff Ely.

I don't completely follow Ely's criticism, which seems to me to be too clever by half, but I agree with Nix that the findings in the research article don't seem to fit together very well. For example, Carrell and West estimate that the effects of instructors on performance in the follow-on class are as large as their effects on the class they're teaching. This seems hard to believe, and it seems central enough to their story that I don't know what to think about everything else in the paper.

My other thought about teaching evaluations is from my personal experience. When I feel I've taught well--that is, in semesters when it seems that students have really learned something--I tend to get good evaluations. When I don't think I've taught well, my evaluations aren't so good. And, even when I think my course has gone wonderfully, my evaluations are usually far from perfect. This has been helpful information for me.

That said, I'd prefer to have objective measures of my teaching effectiveness. Perhaps surprisingly, statisticians aren't so good about measurement and estimation when applied to their own teaching. (I think I've blogged on this on occasion.) The trouble is that measurement and evaluation take work! When we're giving advice to scientists, we're always yammering on about experimentation and measurement. But in our own professional lives, we pretty much throw all our statistical principles out the window.

P.S. What's this paper doing in the Journal of Political Economy? It has little or nothing to do with politics or economics!

P.P.S. I continue to be stunned by the way in which tables of numbers are presented in social science research papers with no thought of communication: for example, tables with interval estimates such as "(.0159, .0408)." (What were all those digits for? And what do these numbers have to do with anything at all?) If the words, sentences, and paragraphs of an article were put together in such a stylized, unthinking way, the article would be completely unreadable. Formal structures with almost no connection to communication or content . . . it would be like writing the entire research article in iambic pentameter with an a,b,c,b rhyme scheme, or somesuch. I'm not trying to pick on Carrell and West here--this sort of presentation is nearly universal in social science journals.

Andrew Gelman (Columbia University) and Eric Johnson (Columbia University) seek to hire a post-doctoral fellow to work on the application of the latest methods of multilevel data analysis, visualization, and regression modeling to an important commercial problem: forecasting retail sales at the individual item level. These forecasts are used to make ordering, pricing, and promotions decisions that can have significant economic impact for the retail chain: even modest improvements in the accuracy of predictions, across a large retailer's product line, can yield substantial margin improvements.

Activities focus on the development of iterative imputation algorithms and diagnostics for missing-data imputation. Activities would include model-development, programming, and data analysis. This project is to be undertaken with, and largely funded by, a firm which provides forecasting technology and services to large retail chains, and which will provide access to a unique and rich set of proprietary data. The postdoc will be expected to spend some time working directly with this firm, but this is fundamentally a research position.

The ideal candidate will have a background in statistics, psychometrics, or economics and be interested in marketing or related topics. He or she should be able to work fluently in R and should already know about hierarchical models and Bayesian inference and computation.

The successful candidate will become part of the lively Applied Statistics Center community, which includes several postdocs (with varied backgrounds in statistics, computer science, and social science), Ph.D., M.A., and undergraduate students, and faculty at Columbia and elsewhere. We want people who love collaboration and have the imagination, drive, and technical skills to make a difference in our projects.

If you are interested in this position, please send a letter of application, a CV, some of your articles, and three letters of recommendation to the Applied Statistics Center coordinator, Caroline Peters. Review of applications will begin immediately.

Thiel update


A year or so ago I discussed the reasoning of zillionaire financier Peter Thiel, who seems to believe his own hype and, worse, seems to be able to convince reporters of his infallibility as well. Apparently he "possesses a preternatural ability to spot patterns that others miss."

More recently, Felix Salmon commented on Thiel's financial misadventures:

Peter Thiel's hedge fund, Clarium Capital, ain't doing so well. Its assets under management are down 90% from their peak, and total returns from the high point are -65%. Thiel is smart, successful, rich, well-connected, and on top of all that his calls have actually been right . . . None of that, clearly, was enough for Clarium to make money on its trades: the fund was undone by volatility and weakness in risk management.

There are a few lessons to learn here.

Firstly, just because someone is a Silicon Valley gazillionaire, or any kind of successful entrepreneur for that matter, doesn't mean they should be trusted with other people's money.

Secondly, being smart is a great way of getting in to a lot of trouble as an investor. In order to make money in the markets, you need a weird combination of arrogance and insecurity. Arrogance on its own is fatal, but it's also endemic to people in Silicon Valley who are convinced that they're rich because they're smart, and that since they're still smart, they can and will therefore get richer still. . . .

Just to be clear, I'm not saying that Thiel losing money is evidence that he's some sort of dummy. (Recall my own unsuccess as an investor.) What I am saying is, don't believe the hype.

Spam is out of control


I just took a look at the spam folder . . . 600 messages in the past hour! Seems pretty ridiculous to me.

Jas sends along this paper (with Devin Caughey), entitled Regression-Discontinuity Designs and Popular Elections: Implications of Pro-Incumbent Bias in Close U.S. House Races, and writes:

The paper shows that regression discontinuity does not work for US House elections. Close House elections are anything but random. It isn't election recounts or something like that (we collect recount data to show that it isn't). We have collected much new data to try to hunt down what is going on (e.g., campaign finance data, CQ pre-election forecasts, correct many errors in the Lee dataset). The substantive implications are interesting. We also have a section that compares in detail Gelman and King versus the Lee estimand and estimator.

I had a few comments:

Chapter 1

On Sunday we were over on 125 St so I stopped by the Jamaican beef patties place but they were closed. Jesus Taco was next door so I went there instead. What a mistake! I don't know what Masanao and Yu-Sung could've been thinking. Anyway, then I had Jamaican beef patties on the brain so I went by Monday afternoon and asked for 9: 3 spicy beef, 3 mild beef (for the kids), and 3 chicken (not the jerk chicken; Bob got those the other day and they didn't impress me). I'm about to pay and then a bunch of people come in and start ordering. The woman behind the counter asks if I'm in a hurry, I ask why, she whispers, For the same price you can get a dozen. So I get two more spicy beef and a chicken. She whispers that I shouldn't tell anyone. I can't really figure out why I'm getting this special treatment. So I walk out of there with 12 patties. Total cost: $17.25. It's a good deal: they're small but not that small. Sure, I ate 6 of them, but I was hungry.

Chapter 2

A half hour later, I'm pulling keys out of my pocket to lock up my bike and a bunch of change falls out. (Remember--the patties cost $17.25, so I had three quarters in my pocket, plus whatever happened to be there already.) I see all three quarters plus a couple of pennies. The change is on the street, and, as I'm leaning down to pick it up, I notice there's a parked car, right in front of me, with its engine running. There's no way the driver can see me if I'm bending down behind the rear wheels. And if he backs up, I'm dead meat.

It suddenly comes to me--this is what they mean when they talk about "picking up pennies in front of a steamroller." That's exactly what I was about to do!

After a brief moment of indecision, I bent down and picked up the quarters. I left the pennies where they were, though.

P.S. The last time I experienced an economics cliche in real time was a few weeks ago, when I spotted $5 in cash on the street.

Hipmunk update


Florence from customer support at Hipmunk writes:

Hipmunk now includes American Airlines in our search results. Please note that users will be taken directly to complete the booking/transaction. . . . we are steadily increasing the number of flights that we offer on Hipmunk.

As you may recall, Hipmunk is a really cool flight-finder that didn't actually work (as of 16 Sept 2010). At the time, I was a bit annoyed at the NYT columnist who plugged Hipmunk without actually telling his readers that the site didn't actually do the job. (I discovered the problem myself because I couldn't believe that my flight options to Raleigh-Durham were really so meager, so I checked on Expedia and found a good flight.)

I do think Hipmunk's graphics are beautiful, though, so I'm rooting for them to catch up.

P.S. Apparently they include Amtrak Northeast Corridor trains, so I'll give them a try, next time I travel. The regular Amtrak website is about as horrible as you'd expect.

Reihan Salam discusses a theory of Tyler Cowen regarding "threshold earners," a sort of upscale version of a slacker. Here's Cowen:

A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to "get by" in terms of absolute earning power in order to experience other gains in the form of leisure.

Salam continues:

This clearly reflects the pattern of wage dispersion among my friends, particularly those who attended elite secondary schools and colleges and universities. I [Salam] know many "threshold earners," including both high and low earners who could earn much more if they chose to make the necessary sacrifices. But they are satisficers.

OK, fine so far. But then the claim is made that "threshold earning" behavior increases income inequality. In Cowen's words:

The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn't so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality [emphasis added]. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.

This is a cute idea but I don't think it's correct. I'll explain my reasoning but first one more quote from Salam:

Alfred Kahn


Appointed "inflation czar" in the late 1970s, Alfred Kahn is most famous for deregulating the airline industry. At the time this seemed to make sense, although in retrospect I'm less a fan of consumer-driven policies than I used to be. When I was a kid we subscribed to Consumer Reports and so I just assumed that everything that was good for the consumer--lower prices, better products, etc.--was a good thing. Upon reflection, though, I think it's a mistake to focus too narrowly on the interests of consumers. For example (from my Taleb review a couple years ago):

The discussion on page 112 of how Ralph Nader saved lives (mostly via seat belts in cars) reminds me of his car-bumper campaign in the 1970s. My dad subscribed to Consumer Reports then (he still does, actually, and I think reads it for pleasure--it must be one of those Depression-mentality things), and at one point they were pushing heavily for the 5-mph bumpers. Apparently there was some federal regulation about how strong car bumpers had to be, to withstand a crash of 2.5 miles per hour, or 5 miles per hour, or whatever--the standard had been 2.5 (I think), then got raised to 5, then lowered back to 2.5, and Consumer's Union calculated (reasonably correctly, no doubt) that the 5 mph standard would, in the net, save drivers money. I naively assumed that CU was right on this. But, looking at it now, I would strongly oppose the 5 mph standard. In fact, I'd support a law forbidding such sturdy bumpers. Why? Because, as a pedestrian and cyclist, I don't want drivers to have that sense of security. I'd rather they be scared of fender-benders and, as a consequence, stay away from me! Anyway, the point here is not to debate auto safety; it's just an interesting example of how my own views have changed. Another example of incentives.

Regarding airline deregulation, a lot of problems have been caused by cheap flights. And, even though I've personally benefited from the convenience, maybe overall we'd be better off with the old system of fewer, more expensive flights. Or maybe expansion was going to happen anyway, in which case it was probably a good idea to try to do things right.

Anyway . . . I never met Alfred Kahn but I heard a lot about him because he was my mother's adviser in college. She studied economics at Cornell and had only good things to say about Kahn. (She also took a course with Feller, the famed probabilist, but she didn't get so much out of that.) We were all very excited in 1978 or whenever it was when Kahn was in the news as the inflation czar. In a slightly different world, my mom would've been doing something like that, rather than staying at home with the kids and getting a mid-level job later in life.

P.S. I'm not claiming any expertise on airline deregulation! My point in bringing this up was to just indicate how my thinking (and that of others too, I'm sure) has changed since the 1970s. When the name of Alfred Kahn comes up, I'm immediately sent back in my mind to 1948 and 1978, so it's interesting to reflect upon intellectual and cultural changes since then.

A couple people pointed me to this recent news article which discusses "why, beyond middle age, people get happier as they get older." Here's the story:

When people start out on adult life, they are, on average, pretty cheerful. Things go downhill from youth to middle age until they reach a nadir commonly known as the mid-life crisis. So far, so familiar. The surprising part happens after that. Although as people move towards old age they lose things they treasure--vitality, mental sharpness and looks--they also gain what people spend their lives pursuing: happiness.

This curious finding has emerged from a new branch of economics that seeks a more satisfactory measure than money of human well-being. Conventional economics uses money as a proxy for utility--the dismal way in which the discipline talks about happiness. But some economists, unconvinced that there is a direct relationship between money and well-being, have decided to go to the nub of the matter and measure happiness itself. . . There are already a lot of data on the subject collected by, for instance, America's General Social Survey, Eurobarometer and Gallup. . . .

And here's the killer graph:


All I can say is . . . it ain't so simple. I learned this the hard way. After reading a bunch of articles on the U-shaped relation between age and happiness--including some research that used the General Social Survey--I downloaded the GSS data (you can do it yourself!) and prepared some data for my introductory statistics class. I made a little dataset with happiness, age, sex, marital status, income, and a couple other variables and ran some regressions and made some simple graphs. The idea was to start with the fascinating U-shaped pattern and then discuss what could be learned further using some basic statistical techniques of subsetting and regression.

But I got stuck--really stuck. Here was my first graph, a quick summary of average happiness level (on a 0, 1, 2 scale; in total, 12% of respondents rated their happiness at 0 (the lowest level), 56% gave themselves a 1, and 32% described themselves as having the highest level on this three-point scale). And below are the raw averages of happiness vs. age. (Note: the graph has changed. In my original posted graph, I plotted the percentage of respondents of each age who had happiness levels of 1 or 2; this corrected graph plots average happiness levels.)


Uh-oh. I did this by single years of age so it's noisy--even when using decades of GSS, the sample's not infinite--but there's nothing like the famous U-shaped pattern! Sure, if you stare hard enough, you can see a U between ages 35 and 70, but the behavior from 20-35 and from 70-90 looks all wrong. There's a big difference between the published graph, which has maxima at 20 and 85, and my graph from the GSS, which has minima at 20 and 85.
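For anyone who wants to replicate this kind of plot, the computation is just a raw group mean by single year of age. Here's a minimal sketch in Python using simulated stand-in data (the variable names and the simulation are mine, not the actual GSS file, though the 12/56/32 happiness split matches the marginals above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for GSS-style data: each respondent has an age and
# a happiness score coded 0, 1, or 2 (marginals from the post: 12/56/32).
n = 5000
age = rng.integers(18, 90, size=n)
happy = rng.choice([0, 1, 2], size=n, p=[0.12, 0.56, 0.32])

# Raw average happiness by single year of age. The small per-year sample
# sizes (here roughly 70 respondents per year) are what make the curve noisy.
ages = np.arange(18, 90)
avg_by_age = np.array([happy[age == a].mean() for a in ages])
```

Plot `avg_by_age` against `ages` and you get a jagged line; whether you "see" a U in it is very much in the eye of the beholder.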

There are a lot of ways these graphs could be reconciled. There could be cohort or period effects, perhaps I should be controlling for other variables, maybe I'm using a bad question, or maybe I simply miscoded the data. All of these are possibilities. I spent several hours staring at the GSS codebook and playing with the data in different ways and couldn't recover the U. Sometimes I could get happiness to go up with age, but then it was just a gradual rise from age 18, without the dip around age 45 or 50. There's a lot going on here and I very well may still be missing something important. [Note: I imagine that sort of cagey disclaimer is typical of statisticians: by our training we are so aware of uncertainty. Researchers in other fields don't seem to feel the same need to do this.]

Anyway, at some point in this analysis I was getting frustrated at my inability to find the U (I felt like the characters in that old movie they used to show on TV on New Year's Eve, all looking for "the big W") and beginning to panic that this beautiful example was too fragile to survive in the classroom.

So I called Grazia Pittau, an economist (!) with whom I'd collaborated on some earlier happiness research (in which I contributed multilevel modeling and some ideas about graphs but not much of substance regarding psychology or economics). Grazia confirmed to me that the U-shaped pattern is indeed fragile, that you have to work hard to find it, and often it shows up when people fit linear and quadratic terms, in which case everything looks like a parabola. (I'd tried regressions with age & age-squared, but it took a lot of finagling to get the coefficient for age-squared to have the "correct" sign.)
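Grazia's point about linear and quadratic terms can be seen directly: a regression of happiness on age and age-squared returns a parabola by construction, so the fitted curve will look like a U (or an inverted U) even when no such pattern is in the data. A sketch with simulated data (all numbers invented; no U built in):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a gentle monotone trend plus noise -- deliberately no U-shape.
n = 2000
age = rng.uniform(18, 90, size=n)
happiness = 1.0 + 0.002 * age + rng.normal(0, 0.5, size=n)

# Fit happiness ~ age + age^2. polyfit returns coefficients from the
# highest degree down: b2 (age^2), b1 (age), b0 (intercept).
b2, b1, b0 = np.polyfit(age, happiness, deg=2)
fitted = b0 + b1 * age + b2 * age**2

# The fitted curve is a parabola whatever the data look like; here b2 is
# tiny and its sign (U vs. inverted U) is essentially noise.
```

That's the finagling problem in miniature: the functional form guarantees a parabola, and only the sign and size of the age-squared coefficient are left for the data to decide.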

And then I encountered a paper by Paul Frijters and Tony Beatton which directly addressed my confusion. Frijters and Beatton write:

Whilst the majority of psychologists have concluded there is not much of a relationship at all, the economic literature has unearthed a possible U-shape relationship. In this paper we [Frijters and Beatton] replicate the U-shape for the German SocioEconomic Panel (GSOEP), and we investigate several possible explanations for it.

They conclude that the U is fragile and that it arises from a sample-selection bias. I refer you to the above link for further discussion.

In summary: I agree that happiness and life satisfaction are worth studying--of course they're worth studying--but, in the midst of looking for explanations for that U-shaped pattern, it might be worth looking more carefully to see what exactly is happening. At the very least, the pattern does not seem to be as clear as implied from some media reports. (Even a glance at the paper by Stone, Schwartz, Broderick, and Deaton, which is the source of the top graph above, reveals a bunch of graphs, only some of which are U-shaped.) All those explanations have to be contingent on the pattern actually existing in the population.

My goal is not to debunk but to push toward some broader thinking. People are always trying to explain what's behind a stylized fact, which is fine, but sometimes they're explaining things that aren't really happening, just like those theoretical physicists who, shortly after the Fleischmann-Pons experiment, came up with ingenious models of cold fusion. These theorists were brilliant but they were doomed because they were modeling a phenomenon which (most likely) doesn't exist.

A comment from a few days ago by Eric Rasmusen seems relevant, connecting this to general issues of confirmation bias. If you make enough graphs and you're looking for a U, you'll find it. I'm not denying the U is there, I'm just questioning the centrality of the U to the larger story of age, happiness, and life satisfaction. There appear to be many different age patterns and it's not clear to me that the U should be considered the paradigm.

P.S. I think this research (even if occasionally done by economists) is psychology, not economics. No big deal--it's just a matter of terminology--but I think journalists and other outsiders can be misled if they hear about this sort of thing and start searching in the economics literature rather than in the psychology literature. In general, I think economists will have more to say than psychologists about prices, and psychologists will have more insights about emotions and happiness. I'm sure that economists can make important contributions to the study of happiness, just as psychologists can make important contributions to the study of prices, but even a magazine called "The Economist" should know the difference.

This link on education reform sent me to this blog on foreign languages in Canadian public schools:

Capitalism as a form of voluntarism


Interesting discussion by Alex Tabarrok (following up on an article by Rebecca Solnit) on the continuum between voluntarism (or, more generally, non-cash transactions) and markets with monetary exchange. I just have a few comments of my own:

1. Solnit writes of "the iceberg economy," which she characterizes as "based on gift economies, barter, mutual aid, and giving without hope of return . . . the relations between friends, between family members, the activities of volunteers or those who have chosen their vocation on principle rather than for profit." I just wonder whether "barter" completely fits in here. Maybe it depends on context. Sometimes barter is an informal way of keeping track (you help me and I help you), but in settings of low liquidity I could imagine barter being simply an inefficient way of performing an economic transaction.

2. I am no expert on capitalism but my impression is that it's not just about "competition and selfishness" but also is related to the ability of firms to build up and use capital. In that sense, I can see how it could be qualitatively different from barter. But I wonder whether Solnit is causing more confusion than clarity by lumping competition, selfishness, and capitalism into a single category. I'm reminded of my article with Edlin and Kaplan where we emphasize that "rational" != "selfish."

3. Tabarrok identifies capitalism with "markets," which again seems like only part of the picture. Sure, you can think of eBay, for example, as a more efficient version of neighbors sharing their unwanted objects, with all the advantages and all the disadvantages of "efficiency" (on one hand, you're more likely to get what you want and you can avoid interacting with annoying people; on the other hand, instead of interacting with people you're sitting at a computer--just as I am right now!). But there are a lot of other aspects of capitalism (from BP oil spills to plain old backstabbing corporate politics) that don't quite fit the "market" or "voluntary exchange" story. I'm not saying that capitalism is bad (or good), just that he seems to be talking more about trade than about capitalism in general.

Mark Palko comments on the (presumably) well-intentioned but silly Jumpstart test of financial literacy, which was given to 7000 high school seniors. Given that, as we heard a few years back, most high school seniors can't locate Miami on a map of the U.S., you won't be surprised to hear that they flubbed item after item on this quiz.

But, as Palko points out, the concept is better than the execution:

Nate writes:

The Yankees have offered Jeter $45 million over three years -- or $15 million per year. . . But that doesn't mean that the process won't be frustrating for Jeter, or that there won't be a few hurt feelings along the way. . . .

$45 million, huh? Even after taxes, that's a lot of money!


Sciencedaily has posted an article titled Apes Unwilling to Gamble When Odds Are Uncertain:

The apes readily distinguished between the different probabilities of winning: they gambled a lot when there was a 100 percent chance, less when there was a 50 percent chance, and only rarely when there was no chance. In some trials, however, the experimenter didn't remove a lid from the bowl, so the apes couldn't assess the likelihood of winning a banana. The odds from the covered bowl were identical to those from the risky option: a 50 percent chance of getting the much sought-after banana. But apes of both species were less likely to choose this ambiguous option.
Like humans, they showed "ambiguity aversion" -- preferring to gamble more when they knew the odds than when they didn't. Given some of the other differences between chimps and bonobos, Hare and Rosati had expected to find the bonobos to be more averse to ambiguity, but that didn't turn out to be the case.

Thanks to Stan Salthe for the link.

The title of this blog post quotes the second line of the abstract of Goldstein et al.'s much ballyhooed 2008 tech report, Do More Expensive Wines Taste Better? Evidence from a Large Sample of Blind Tastings.

The first sentence of the abstract is

Individuals who are unaware of the price do not derive more enjoyment from more expensive wine.

Perhaps not surprisingly, given the easy target wine snobs make, the popular press has picked up on the first sentence of the tech report. For example, the Freakonomics blog/radio entry of the same name quotes the first line, ignores the qualification, then concludes

Wishing you the happiest of holiday seasons, and urging you to spend $15 instead of $50 on your next bottle of wine. Go ahead, take the money you save and blow it on the lottery.

In case you're wondering about whether to buy me a cheap or expensive bottle of wine, keep in mind I've had classical "wine training". After ten minutes of training with some side by side examples, you too will be able to distinguish traditional old world wine from 3-buck Chuck in a double blind tasting. Whether you'll be able to tell a quality village Volnay from a premier cru's another matter.

There's another problem with the experimental design. Wines that stand out in a side-by-side tasting are not necessarily the ones you want to pair with food or even drink all night on their own.

Still another problem is that some people genuinely prefer the 3-buck Chuck. Most Americans I've observed, including myself, start out enjoying sweeter new world style wines and then over time gravitate to more structured (tannic), complex (different flavors), and acidic wines.

I received the following press release from the Heritage Provider Network, "the largest limited Knox-Keene licensed managed care organization in California." I have no idea what this means, but I assume it's some sort of HMO.

In any case, this looks like it could be interesting:

Participants in the Health Prize challenge will be given a data set comprised of the de-identified medical records of 100,000 individuals who are members of HPN. The teams will then need to predict the hospitalization of a set percentage of those members who went to the hospital during the year following the start date, and do so with a defined accuracy rate. The winners will receive the $3 million prize. . . . the contest is designed to spur involvement by others involved in analytics, such as those involved in data mining and predictive modeling who may not currently be working in health care. "We believe that doing so will bring innovative thinking to health analytics and may allow us to solve at least part of the health care cost conundrum . . ."

I don't know enough about health policy to know if this makes sense. Ultimately, the goal is not to predict hospitalization, but to avoid it. But maybe if you can predict it well, it could be possible to design the system a bit better. The current system--in which the doctor's office is open about 40 hours a week, and otherwise you have to go to the emergency room--is a joke.

Sander Wagner writes:

I just read the post on ethical concerns in medical trials. As there seems to be a lot more pressure on private researchers, I thought it might be a nice little exercise to compare p-values from privately funded medical trials with those reported from publicly funded research, to see if confirmation pressure is higher in private research (i.e., p-values are closer to the cutoff levels for significance for the privately funded research). Do you think this is a decent idea or are you sceptical? Also, are you aware of any sources listing a large number of representative medical studies and their type of funding?

My reply:

This sounds like something worth studying. I don't know where to get data about this sort of thing, but now that it's been blogged, maybe someone will follow up.
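Wagner's proposed comparison could start as something like the following sketch. Everything here is illustrative: the function name, the cutoff window, and the p-values are all invented; a real analysis would need actual trial results tagged by funding source.

```python
# Sketch of Wagner's idea: do privately funded trials show more
# p-values bunched just under the 0.05 cutoff than publicly funded
# ones? All p-values below are made up for illustration.

def fraction_just_significant(p_values, cutoff=0.05, window=0.01):
    """Fraction of p-values falling in (cutoff - window, cutoff)."""
    near = [p for p in p_values if cutoff - window < p < cutoff]
    return len(near) / len(p_values)

# Hypothetical p-values from two sets of trials.
private_p = [0.049, 0.032, 0.048, 0.21, 0.045, 0.003, 0.047, 0.60]
public_p = [0.30, 0.02, 0.11, 0.46, 0.049, 0.008, 0.75, 0.19]

print(fraction_just_significant(private_p))  # 0.5
print(fraction_just_significant(public_p))   # 0.125
```

A gap like the one in these made-up numbers is what "confirmation pressure" would look like; with real data you'd also want to worry about the different kinds of questions the two sectors study.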

"'Why work?'"


Tyler Cowen links to a "scary comparison" that claims that "a one-parent family of three making $14,500 a year (minimum wage) has more disposable income than a family making $60,000 a year."

Kaiser Fung looks into this comparison in more detail. As Kaiser puts it:

Xuequn Hu writes:

I am an econ doctoral student, trying to do some empirical work using Bayesian methods. Recently I read a paper (and its discussion) that pitches Bayesian methods against GMM (Generalized Method of Moments), which is quite popular among frequentist econometricians. I am wondering if you can, here or on your blog, give some insights about these two methods from the perspective of a Bayesian statistician. I know GMM does not conform to the likelihood principle, but Bayesians are often charged with making strong distributional assumptions.

I can't actually help on this, since I don't know what GMM is. My guess is that, like other methods that don't explicitly use prior estimation, this method will work well if sufficient information is included as data. Which would imply a hierarchical structure.

Rational addiction


Ole Rogeberg sends in this:

and writes:

No idea if this is amusing to non-economists, but I tried my hand at the xtranormal-trend. It's an attempt to spoof the many standard "incantations" I've encountered over the years from economists who don't want to agree that rational addiction theory lacks justification for some of the claims it makes. More specifically, the claims that the theory can be used to conduct welfare analysis of alternative policies.

See here (scroll to Rational Addiction) and here for background.

See below. W. D. Burnham is a former professor of mine, T. Ferguson does important work on money and politics, and J. Stiglitz is a colleague at Columbia (whom I've never actually met). Could be interesting.

I guess there's a reason they put this stuff in the Opinion section and not in the Science section, huh?

P.S. More here.

For a while I've been curious (see also here) about the U-shaped relation between happiness and age (with people least happy, on average, in their forties, and happier before and after).

But when I tried to demonstrate it to my intro statistics course, using the General Social Survey, I couldn't find the famed U, or anything like it. Using pooled GSS data mixes age, period, and cohort effects, so I tried throwing in some cohort effects (indicators for decades) and a couple other variables, but still couldn't find that U.
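For readers who want to see what that kind of regression looks like, here is a minimal sketch on simulated data (not the GSS; the sample size, coefficients, and cohort effects are all invented): a quadratic in age plus one indicator per birth decade.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake pooled-survey data with a built-in U-shape in age plus
# birth-decade (cohort) effects -- all numbers invented.
n = 5000
year = rng.integers(1975, 2005, n)
age = rng.integers(18, 85, n)
cohort_decade = (year - age) // 10
happy = (0.002 * (age - 50) ** 2        # true U, bottoming out at age 50
         + 0.05 * cohort_decade         # cohort effect, constant within decade
         + rng.normal(0, 1, n))

# Design matrix: age, age^2, plus one indicator per birth decade
# (the decade indicators absorb the intercept).
decades = np.unique(cohort_decade)
X = np.column_stack([age, age ** 2]
                    + [(cohort_decade == d).astype(float) for d in decades])
coef, *_ = np.linalg.lstsq(X, happy, rcond=None)
b_age, b_age2 = coef[0], coef[1]
# A U-shape shows up as b_age < 0 together with b_age2 > 0.
```

With a true U baked in, the fit recovers negative linear and positive quadratic age coefficients; with real GSS data the question is whether that pattern survives once the cohort indicators are included.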

So I was intrigued when I came across this paper by Paul Frijters and Tony Beatton, who write:

Whilst the majority of psychologists have concluded there is not much of a relationship at all, the economic literature has unearthed a possible U-shape relationship. In this paper we [Frijters and Beatton] replicate the U-shape for the German Socio-Economic Panel (GSOEP), and we investigate several possible explanations for it.

They write:

What is the relationship between happiness and age? Do we get more miserable as we get older, or are we perhaps more or less equally happy throughout our lives with only the occasional special event (marriage, birth, promotion, health shock) that temporarily raises or reduces our happiness, or do we actually get happier as life gets on and we learn to be content with what we have?

The answer to this question in the recent economic literature on the subject is that the age-happiness relationship is U-shaped. This finding holds for the US, Germany, Britain, Australia, Europe, and apparently even South Africa. The stylised finding is that individuals gradually get unhappier after their 18th birthday, with a dip around 50 followed by a gradual upturn in old age. The predicted effect of age can be quite large, i.e. the difference in average happiness between an 18 year old and a 50 year old can be as much as 1.5 points on a 10 point scale.

Their conclusion:

The inclusion of the usual socio-economic variables in a cross-section leads to a U-shape in age that results from indirectly-age-related reverse causality. Putting it simply: good things, like getting a job and getting married, appear to happen to middle aged individuals who were already happy. . . . The found effect of age in fixed-effect regressions is simply too large and too out of line with everything else we know to be believable. The difference between first-time respondents and stayers and between the number of years someone stays in the panel doesn't allow for explanations based on fixed traits or observables. There has to be either a problem on the left-hand side (i.e. the measurement of happiness over the life of a panel) or on the right-hand side (selection on time-varying unobservables).

They think it's a sample-selection bias and not a true U-shaped pattern. Another stylized fact bites the dust (perhaps).

. . . they're not in awe of economists.

In contrast, economists sometimes treat each other with the soft bigotry of low expectations. For example, here's Brad DeLong in defense of Larry Summers:

[During a 2005 meeting, Summers] said that in a modern economy with sophisticated financial markets we were likely to have more and bigger financial crises than we had before, just as the worst modern transportation accidents are worse than the worst transportation accidents back in horse-and-buggy days. . . . Indeed, for twenty years one of Larry's conversation openers has been: "You really should write something else good on positive-feedback trading and its dangers for financial markets."

That's fine, but, hey, I've been going around saying this for many years too, and I'm not even an economist (although I did get an A in the last econ class I took, which was in eleventh grade). Lots and lots of people have been talking for years about the dangers of positive feedback, the risks of insurers covering the small risks and thus increasing correlation in the system and setting up big risks, etc.

I don't think Summers, as one of the world's foremost economists, deserves much credit for noticing this theoretical problem too and going around telling people that they "really should write something" on the topic. You get credit by doing, not by telling other people to do.

I think Steve Hsu (see above link) gets the point. No one's going to go around saying that some physicist is a genius because he's been going around for twenty years with a conversation opener like, "Hey--general relativity and quantum mechanics are incoherent. You should really write something about how to put them together in a single mathematical model."

P.S. Just to be clear, I'm not trying to argue with DeLong on the economics here. He may be completely right that Rajan was wrong and Summers was right in their 2005 exchange. But I do think he's a bit too overawed by Summers's putative brilliance. In a dark room with many of the lights covered up by opaque dollar bills, even a weak and intermittent beam can appear brilliant, if you look right at it.

In response to my most recent post expressing bafflement over the Erving Goffman mystique, several commenters helped out by suggesting classic Goffman articles for me to read. Naturally, I followed the reference that had a link attached--it was for an article called Cooling the Mark Out, which analogized the frustrations of laid-off and set-aside white-collar workers to the reactions of suckers after being bilked by con artists.

Goffman's article was fascinating, but I was bothered by a tone of smugness. Here's a quote from Cooling the Mark Out that starts on the cute side but is basically ok:

In organizations patterned after a bureaucratic model, it is customary for personnel to expect rewards of a specified kind upon fulfilling requirements of a specified nature. Personnel come to define their career line in terms of a sequence of legitimate expectations and to base their self-conceptions on the assumption that in due course they will be what the institution allows persons to become.

It's always amusing to see white-collar types treated anthropologically, so that's fine. But then Goffman continues:

Sometimes, however, a member of an organization may fulfill some of the requirements for a particular status, especially the requirements concerning technical proficiency and seniority, but not other requirements, especially the less codified ones having to do with the proper handling of social relationships at work.

This seemed naive at best and obnoxious at worst. As if, whenever someone is not promoted, it's either because he can't do the job or he can't play the game. Unless you want to define this completely circularly (with "playing the game" retrospectively equaling whatever it takes to do to keep the job), this just seems wrong. In corporate and academic settings alike, lots of people get shoved aside either for reasons entirely beyond their control (e.g., a new division head comes in and brings in his own people) or out of simple economics.

Goffman was a successful organization man and couldn't resist taking a swipe at the losers in the promotion game. It wasn't enough for him to say that some people don't ascend the ladder; he had to attribute that to not fulfilling the "less codified [requirements] having to do with the proper handling of social relationships at work."

Well, no. In the current economic climate this is obvious, but even back in the 1960s there were organizations with too few slots at the top for all the aspirants at the bottom, and it seems a bit naive to suppose that not reaching the top rungs is necessarily a sign of improper handling of social relationships.

In this instance, Goffman seems like the classic case of a successful person who thinks that, hey, everybody could be a success were they blessed with his talent and social skills.

This was the only thing by Goffman I'd read, though, so to get a broader perspective I sent a note to Brayden King, the sociologist whose earlier post on Goffman had got me started on this.

King wrote:

People in sociology are mixed on their feelings about Goffman's scholarship. He's a love-him-or-hate-him figure. I lean more toward the love him side, if only because I think he really built up the symbolic interactionist theory subfield in sociology.

I think that one of the problems is that you're thinking of this as a proportion of variance problem, in which case I think you're right that "how you play the game" explains a lot less variance in job attainment than structural factors. Goffman wasn't really interested in explaining variance though. His style was to focus on a kind of social interaction and then try to explain the strategies or roles that people use in those interactions to engage in impression management. So, for him, a corporate workplace was interesting for the same reason an asylum is - they're both places where role expectations shape the way people interact and try to influence the perceptions that others have of them.

It's a very different style of scholarship, but nevertheless it's had a huge influence in sociology's version of social psych. The kind of work that is done in this area is highly qualitative, often ethnographic. From a variance-explanation perspective, though, I see your point. How much does "playing the game" really matter when the economy is collapsing and companies are laying off thousands of employees?

Suguru Mizunoya writes:

When we estimate the number of people from a national sampling survey (such as a labor force survey) using sampling weights, don't we obtain an underestimated number of people if the country's population is growing and the sampling frame is based on old census data? In countries with increasing populations, the probability of inclusion changes over time, but the weights can't be adjusted frequently because the census takes place only once every five or ten years.

I am currently working for UNICEF on a project estimating the number of out-of-school children in developing countries. The project leader is comfortable using estimates of the number of people from DHS and other surveys. But I am concerned that we may need to adjust the estimated number of people by the population projection; otherwise the estimates will be too low.

I googled around on this issue, but I could not find a right article or paper on this.

My reply: I don't know if there's a paper on this particular topic, but, yes, I think it would be standard to do some demographic analysis and extrapolate the population characteristics using some model, then poststratify on the estimated current population.
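To make the suggestion concrete, here is a minimal sketch with invented numbers: estimate the rate of interest within each stratum from the survey, then weight the strata by projected current population counts rather than the stale census counts.

```python
# Toy poststratification example -- all counts and rates are invented.
# Per-stratum rates estimated from the survey:
survey_rate = {"urban": 0.125, "rural": 0.25}
# Stratum population counts from the old census vs. a demographic projection:
census_pop = {"urban": 4_000_000, "rural": 6_000_000}
projected_pop = {"urban": 5_000_000, "rural": 6_500_000}

def poststratified_total(rates, pop):
    """Weight each stratum's estimated rate by its population count."""
    return sum(rates[s] * pop[s] for s in rates)

print(poststratified_total(survey_rate, census_pop))     # 2000000.0
print(poststratified_total(survey_rate, projected_pop))  # 2250000.0
```

The gap between the two totals is exactly the underestimate Mizunoya is worried about; the modeling work is in producing credible projected stratum counts.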

P.S. Speaking of out-of-date censuses, I just hope you're not working with data from Lebanon!

Chris Wiggins sends along this.

It's a meetup at Davis Auditorium, CEPSR Bldg, Columbia University, on Wed 10 Nov (that's tomorrow! or maybe today! depending on when you're reading this), 6-8pm.

Greg Kaplan writes:

I noticed that you have blogged a little about interstate migration trends in the US, and thought that you might be interested in a new working paper of mine (joint with Sam Schulhofer-Wohl from the Minneapolis Fed) which I have attached.

Briefly, we show that much of the recent reported drop in interstate migration is a statistical artifact: The Census Bureau made an undocumented change in its imputation procedures for missing data in 2006, and this change significantly reduced the number of imputed interstate moves. The change in imputation procedures -- not any actual change in migration behavior -- explains 90 percent of the reported decrease in interstate migration between the 2005 and 2006 Current Population Surveys, and 42 percent of the decrease between 2000 and 2010.

I haven't had a chance to give it a serious look, so could only make the quick suggestion to make the graphs smaller and put multiple graphs on a page. This would allow the reader to better follow the logic of your reasoning.

But some of you might be interested in the substance of the paper. In any case, it's pretty scary how a statistical adjustment can have such a large effect. (Not that, in general, there's any way to use "unadjusted" data. As Little and Rubin have pointed out, lack of any apparent adjustment itself corresponds to some strong and probably horrible assumptions.)

P.S. See here for another recently-discovered problem with Census data.

Consulting: how do you figure out what to charge?


I'm a physicist by training, statistical data analyst by trade. Although some of my work is pretty standard statistical analysis, more often I work somewhere in a gray area that includes physics, engineering, and statistics. I have very little formal statistics training but I do study in an academic-like way to learn techniques from the literature when I need to. I do some things well but there are big gaps in my stats knowledge compared to anyone who has gone to grad school in statistics. On the other hand, there are big gaps in most statisticians' physics and engineering knowledge compared to anyone who has gone to grad school in physics. Generally my breadth and depth of knowledge is about right for the kind of work that I do, I think.

But last week I was offered a consulting job that might be better done by someone with more conventional stats knowledge than I have. The job involves gene expression in different types of tumors, so it's "biostatistics" by definition, but the specific questions of interest aren't specialized biostats ones (there's no analysis of microarray data, for instance). I'm comfortable doing the work, but I'm not the ideal person for the job. I was very clear about that both in writing and on the phone, but the company wanted to hire me anyway: they need a few questions answered very quickly, and their staff is so overworked at the moment that they would rather have me -- I was suggested or at least mentioned by a friend who works at the company -- than have one of their people spend hours trying to track down someone else who can do the work right away, even if that person is better.

I said sure, but then had to decide how much to charge. I've only ever done five small consulting jobs, and I've charged as little as $80/hour (working for some ecologists who didn't have any money) and as much as $250/hour (consortium of insurance companies).

Picking a number out of the air, I'm charging $150/hour. Upon reflection, this feels low to me. Of course one way to think of it is: would I rather have spent three hours last night working on this project for $450, or would I have preferred doing whatever else I would have done instead but not making any money? (My wife is out of town and I hadn't made plans, so I probably just would have read or watched TV). By that standard I am charging a fair rate, I was happy enough working on this last night. But I also have to put in some time this weekend, when I might feel differently: I'll probably be giving up something more enjoyable this weekend. Still, overall I think that if I focus just on my own satisfaction in a limited sense, then $150/hour is OK.

On the other hand, I think that from the company's perspective, at least in this particular instance, they are getting a fantastic deal. Having spoken with the people they've had looking at the data up to now, I am definitely much better at this than they are!

So if the company is thinking "boy, this is absolutely fantastic, that we were able to get this so quickly and for so little money", while I'm thinking "Eh, OK, this isn't too bad and I'm getting enough money to pay for a year of cell phone service [or whatever]", then I feel like I should have asked for more (or should in the future).

I know there are people out there who charge much more. But on the other hand, some universities offer stats consulting for $80-$100/hour, although this is surely not the free-market rate.

For the future it would be good to have a better idea of how to set a rate.


Taleb + 3.5 years


I recently had the occasion to reread my review of The Black Swan, from April 2007.

It was fun reading my review (and also this pre-review; "nothing useful escapes from a blackbody," indeed). It was like a greatest hits of all my pet ideas that I've never published.

Looking back, I realize that Taleb really was right about a lot of things. Now that the financial crisis has happened, we tend to forget that the experts whom Taleb bashes were not always reasonable at all. Here's what I wrote in my review, three and a half years ago:

On page 19, Taleb refers to the usual investment strategy (which I suppose I actually use myself) as "picking pennies in front of a steamroller." That's a cute phrase; did he come up with it? I'm also reminded of the famous Martingale betting system. Several years ago in a university library I came across a charming book by Maxim (of gun fame) where he went through chapter after chapter demolishing the Martingale system. (For those who don't know, the Martingale system is to bet $1, then if you lose, bet $2, then if you lose, bet $4, etc. You're then guaranteed to win exactly $1--or lose your entire fortune. A sort of lottery in reverse, but an eternally popular "system.")

Throughout, Taleb talks about forecasters who aren't so good at forecasting, picking pennies in front of steamrollers, etc. I imagine much of this can be explained by incentives. For example, those Long-Term Capital guys made tons of money, then when their system failed, I assume they didn't actually go broke. They have an incentive to ignore those black swans, since others will pick up the tab when they fail (sort of like FEMA pays for those beachfront houses in Florida). It reminds me of the saying that I heard once (referring to Donald Trump, I believe) that what matters is not your net worth (assets minus liabilities), but the absolute value of your net worth. Being in debt for $10 million and thus being "too big to fail" is (almost) equivalent to having $10 million in the bank.

So, yeah, "too big to fail" is not a new concept. But as late as 2007, it was still a bit of an underground theory. People such as Taleb screamed about it, but the authorities weren't listening.

And then there are parts of the review that make me really uncomfortable. As noted in the above quote, I was using the much-derided "picking pennies in front of a steamroller" investment strategy myself--and I knew it! Here's some more, again from 2007:

I'm only a statistician from 9 to 5

I try (and mostly succeed, I think) to have some unity in my professional life, developing theory that is relevant to my applied work. I have to admit, however, that after hours I'm like every other citizen. I trust my doctor and dentist completely, and I'll invest my money wherever the conventional wisdom tells me to (just like the people whom Taleb disparages on page 290 of his book).

Not long after, there was a stock market crash and I lost half my money. OK, maybe it was only 40%. Still, what was I thinking--I read Taleb's book and still didn't get the point!

Actually, there was a day in 2007 or 2008 when I had the plan to shift my money to a safer place. I recall going on the computer to access my investment account but I couldn't remember the password, was too busy to call and get it, and then forgot about it. A few weeks later the market crashed.

If only I'd followed through that day. Oooohhh, I'd be so smug right now. I'd be going around saying, yeah, I'm a statistician, I read Taleb's book and I thought it through, blah blah blah. All in all, it was probably better for me to just lose the money and maintain a healthy humility about my investment expertise.

But the part of the review that I really want everyone to read is this:

On page 16, Taleb asks "why those who favor allowing the elimination of a fetus in the mother's womb also oppose capital punishment" and "why those who accept abortion are supposed to be favorable to high taxation but against a strong military," etc. First off, let me chide Taleb for deterministic thinking. From the General Social Survey cumulative file, here's the crosstab of the responses to "Abortion if woman wants for any reason" and "Favor or oppose death penalty for murder":

40% supported abortion for any reason. Of these, 76% supported the death penalty.

60% did not support abortion under all conditions. Of these, 74% supported the death penalty.

This was the cumulative file, and I'm sure things have changed in recent years, and maybe I even made some mistake in the tabulation, but, in any case, the relation between views on these two issues is far from deterministic!
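For concreteness, the two quoted breakdowns can be folded into a single 2x2 table of joint proportions. This is just a rearrangement of the numbers above, not new data:

```python
# Numbers taken from the GSS tabulation quoted in the post.
p_pro_choice = 0.40    # support abortion for any reason
dp_given_pro = 0.76    # death-penalty support among them
dp_given_anti = 0.74   # death-penalty support among the rest

joint = {
    ("pro-choice", "pro-DP"):  p_pro_choice * dp_given_pro,
    ("pro-choice", "anti-DP"): p_pro_choice * (1 - dp_given_pro),
    ("other",      "pro-DP"):  (1 - p_pro_choice) * dp_given_anti,
    ("other",      "anti-DP"): (1 - p_pro_choice) * (1 - dp_given_anti),
}
# The near-identical conditional rates (76% vs. 74%) mean the two views
# are close to statistically independent -- nothing like the
# deterministic link Taleb's rhetorical question implies.
```

Under exact independence the 76% and 74% would coincide; a two-point gap is about as far from "those who favor X also oppose Y" as you can get.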

Finally, a lot of people bash Taleb, partly for his idiosyncratic writing style, but I have fond memories of both his books, for their own sake and because they inspired me to write down some of my pet ideas. Also, he deserves full credit for getting things right several years ago, back when the Larry Summerses of the world were still floating on air, buoyed by the heads-I-win, tails-you-lose system that kept the bubble inflated for so long.

Musical chairs in econ journals


Tyler Cowen links to a paper by Bruno Frey on the lack of space for articles in economics journals. Frey writes:

To further their careers, [academic economists] are required to publish in A-journals, but for the vast majority this is impossible because there are few slots open in such journals. Such academic competition may be useful to generate hard work; however, there may be serious negative consequences: the wrong output may be produced in an inefficient way, the wrong people may be selected, and losers may react in a harmful way.

According to Frey, the consensus is that there are only five top economics journals--and one of those five is Econometrica, which is so specialized that I'd say that, for most academic economists, there are only four top places they can publish. The difficulty is that demand for these slots outpaces supply: for example, in 2007 there were only 275 articles in all these journals combined (or 224 if you exclude Econometrica), while "a rough estimate is that there are around 10,000 academics actively aspiring to publish in A-journals."

I agree completely with Frey's assessment of the problem, and I've long said that statistics has a better system: there are a lot fewer academic statisticians than academic economists, and we have many more top journals we can publish in (all the probability and statistics journals, plus the econ journals, plus the poli sci journals, plus the psych journals, etc), so there's a lot less pressure.

I wonder if part of the problem with the econ journals is that economists enjoy competition. If there were not such a restricted space in top journals, they wouldn't have a good way to keep score.

Just by comparison, I've published in most of the top statistics journals, but my most cited articles have appeared in Statistical Science, Statistica Sinica, Journal of Computational and Graphical Statistics, and Bayesian Analysis. Not a single "top 5 journal" in the bunch.

But now let's take the perspective of a consumer of economics journals, rather than thinking about the producers of the articles. From my consumer's perspective, it's ok that the top five journals are largely an insider's club (with the occasional exceptional article from an outsider). These insiders have a lot to say, and it seems perfectly reasonable for them to have their own journal. The problem is not the exclusivity of the journals but rather the presumption that outsiders and new entrants should be judged based on their ability to conform to the standards of these journals. The tenured faculty at the top 5 econ depts are great, I'm sure--but does the world really need 10,000 other people trying to become just like them??? Again, based on my own experience, some of our most important work is the stuff that does not conform to conventional expectations.

P.S. I met Frey once. He said, "Gelman . . . you wrote the zombies paper!" So, you see, you don't need to publish in the AER for your papers to get noticed. Arxiv is enough. I don't know whether this would work with more serious research, though.

P.P.S. On an unrelated note, if you have to describe someone as "famous," he's not. (Unless you're using "famous" to distinguish two different people with the same name (for example, "Michael Jordan--not the famous one"), but it doesn't look like that's what's going on here.)

I found a $5 bill on the street today.

Hendrik Juerges writes:

I am an applied econometrician. The reason I am writing is that I am pondering a question for some time now and I am curious whether you have any views on it.

One problem the practitioner of instrumental variables estimation faces is large standard errors even with very large samples. Part of the problem is of course that one estimates a ratio. Anyhow, more often than not, I and many other researchers I know end up with large point estimates and standard errors when trying IV on a problem. Sometimes some of us are lucky and get a statistically significant result. Those estimates that make it beyond the 2 standard error threshold are often ridiculously large (one famous example in my line of research being Lleras-Muney's estimates of the 10% effect of one year of schooling on mortality). The standard defense here is that IV estimates the complier-specific causal effect (which is mathematically correct). But still, I find many of the IV results (including my own) simply incredible.

Now comes my question: Could it be that IV is particularly prone to "type M" errors? (I recently read your article on beauty, sex, and power). If yes, what can be done? Could Bayesian inference help?

My reply:

I've never actually done any instrumental variables analysis, Bayesian or otherwise. But I do recall that Imbens and Rubin discuss Bayesian solutions in one of their articles, and I think they made the point that the inclusion of a little bit of prior information can help a lot.

In any case, I agree that if standard errors are large, then you'll be subject to Type M errors. That's basically an ironclad rule of statistics.

My own way of understanding IV is to think of the instrument as having a joint effect on the intermediate and final outcomes. Often this can be clear enough, and you don't need to actually divide the coefficients.

And here are my more general thoughts on the difficulty of estimating ratios.
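One way to see why ratios of coefficients behave badly is the standard delta-method approximation for the standard error of a ratio. This is a generic textbook formula, not anything specific to Juerges's application, and the numbers below are invented; it assumes the numerator and denominator estimates are independent.

```python
import math

def ratio_se(num, se_num, den, se_den):
    """Delta-method standard error of num/den (independence assumed):
    |num/den| * sqrt((se_num/num)^2 + (se_den/den)^2)."""
    ratio = num / den
    return abs(ratio) * math.sqrt((se_num / num) ** 2 + (se_den / den) ** 2)

# Same reduced-form effect and noise, strong vs. weak first stage:
strong = ratio_se(num=0.10, se_num=0.02, den=0.50, se_den=0.02)
weak = ratio_se(num=0.10, se_num=0.02, den=0.05, se_den=0.02)
print(round(strong, 3))  # 0.041
print(round(weak, 3))    # 0.894
```

With the weak first stage the point estimate is also twenty times larger (2.0 vs. 0.2), so the estimates that clear the two-standard-error bar are exactly the inflated ones: the Type M mechanism in miniature.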

Mankiw tax update


I was going through the blog and noticed this note on an article by Mankiw and Weinzierl who implied that the state only has a right to tax things that are "unjustly wrestled from someone else." This didn't make much sense to me--whether it's the sales tax, the income tax, or whatever, I see taxes as a way to raise money, not as a form of punishment. At the time, I conjectured this was a general difference in attitude between political scientists and economists, but in retrospect I realize I'm dealing with n=1 in each case.

See here for further discussion of taxing "justly acquired endowments."

The only reason I'm bringing this all up now is that I think it is relevant to our recent discussion here and here of Mankiw's work incentives. Mankiw objected to paying a higher marginal tax rate, and I think part of this is that he sees taxes as a form of punishment, and since he came by his income honestly he doesn't think it's fair to have to pay taxes on it. My perspective is slightly different, partly because I never thought of taxation as being restricted to funds that have been "unjustly wrestled."

Underlying this is a lot of economics, and I'm not presenting this as any sort of argument for higher (or lower) marginal tax rates. I'm just trying to give some insight into where Mankiw might be coming from. A lot of people thought his column on this 80% (or 90% or 93%) marginal tax rate was a little weird, but if you start from the position that only unjust income should be taxed, it all makes a lot more sense.


Cyrus Samii, PhD candidate, Department of Political Science, Columbia University:
"Peacebuilding Policies as Quasi-Experiments: Some Examples"

Macartan Humphreys, Associate Professor, Department of Political Science, Columbia University:
"Sampling in developing countries: Five challenges from the field"

Friday 22 Oct, 3-5pm in the Playroom (707 International Affairs Building). Open to all.

There's only one Amtrak


Just was buying my ticket online. Huge amounts of paperwork . . . can't they contract this out to someone? Anyway, at the very end, I got this item:

Recommended: Add Quik-Trip Travel Protection

Get 24/7 protection for your trip with a plan that provides:

* Electronic and Sporting Equipment coverage up to $1,000
* Travel Delay coverage (delays of 6 hrs. or more) up to $150
* 24/7 Travel Emergency Assistance

Yes! For just $8.50 per traveler, I'd like to add Quik-Trip Travel Protection. This is $8.50 total. Restrictions apply, learn more.
No thanks. I decline Quik-Trip Travel Protection.

"Restrictions apply," huh? My favorite part, though, is "Travel Delay coverage (delays of 6 hrs. or more) up to $150." I can just imagine the formula they have: "Your delay is 8 hours and 20 minutes, huh? Let's look that up . . . it looks like you're entitled to $124. And thanks for riding Amtrak!" But if your delay is only 5 hours and 50 minutes, forget about it.

P.S. My most memorable Amtrak experience was several years ago when I found myself sitting next to an elderly gentleman who was reading through some official-looking documents. I gradually realized it was Rep. Mike Castle of Delaware. I started up a conversation and told him about our research on political polarization, a topic which he knew all about, of course.


Wow--economists are under a lot of pressure. Not only do they have to keep publishing after they get tenure; they have to be funny, too! It's a lot easier in statistics and political science. Nobody expects us to be funny, so any little witticism always gets a big laugh.

P.S. I think no one will deny that Levitt has a sense of humor. For example, he ran this item with a straight face, relaying to NYT readers in October 2008 that "the current unemployment rate of 6.1 percent is not alarming."

P.P.S. I think this will keep me safe for awhile.

Tyler Cowen links to a blog by Greg Mankiw with further details on his argument that his anticipated 90% marginal tax rate will reduce his work level.

Having already given my thoughts on Mankiw's column, I merely have a few things to add/emphasize.

Greg Mankiw writes (link from Tyler Cowen):

Without any taxes, accepting that editor's assignment would have yielded my children an extra $10,000. With taxes, it yields only $1,000. In effect, once the entire tax system is taken into account, my family's marginal tax rate is about 90 percent. Is it any wonder that I [Mankiw] turn down most of the money-making opportunities I am offered?

By contrast, without the tax increases advocated by the Obama administration, the numbers would look quite different. I would face a lower income tax rate, a lower Medicare tax rate, and no deduction phaseout or estate tax. Taking that writing assignment would yield my kids about $2,000. I would have twice the incentive to keep working.

First, the good news

Obama's tax rates are much lower than Mankiw had anticipated! According to the above quote, his marginal tax rate is currently 80% but threatens to rise to 90%.

But, in October 2008, Mankiw calculated that Obama's plan would tax his marginal dollar at 93%. What we're saying, then, is that Mankiw's marginal tax rate is currently thirteen percentage points lower than he'd anticipated two years ago. In fact, Mankiw's stated current marginal tax rate of 80% is three points lower than the tax rate he expected to pay under a McCain administration! And if the proposed new tax laws are introduced, Mankiw's marginal tax rate of 90% will still be three percentage points lower than he'd anticipated, back during the 2008 election campaign. I assume that, for whatever reason, Obama did not follow through on all his tax-raising promises.

To frame the numbers more dramatically: According to Mankiw's calculations, he is currently keeping almost three times the proportion of his income that he was expecting to keep under the Obama administration (and 18% more than he was expecting to keep under a hypothetical McCain administration). If the new tax plans are put into effect, Mankiw will still keep 43% more of his money than he was expecting to keep, only two years ago. (For those following along at home, the calculations are (1-0.80)/(1-0.93)=2.9, (1-0.80)/(1-0.83)=1.18, and (1-0.90)/(1-0.93)=1.43.)

Given that Mankiw currently gets to keep 20% of his money--rather than the measly 7% he was anticipating--it's no surprise that he's still working!
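The arithmetic above can be checked directly; the rates are the ones quoted in the post:

```python
# Fraction of a marginal dollar kept at each stated marginal tax rate
def kept(rate):
    return 1 - rate

current = 0.80      # Mankiw's stated current marginal rate
proposed = 0.90     # under the proposed tax changes
obama_2008 = 0.93   # what Mankiw anticipated from Obama in October 2008
mccain_2008 = 0.83  # what he anticipated from a McCain administration

print(round(kept(current) / kept(obama_2008), 1))    # -> 2.9
print(round(kept(current) / kept(mccain_2008), 2))   # -> 1.18
print(round(kept(proposed) / kept(obama_2008), 2))   # -> 1.43
```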

Now, the bad news

I don't think Mankiw has fully thought this through.

Steven Levitt writes:

After noticing these remarks on expensive textbooks and this comment on the company that bribes professors to use their books, Preston McAfee pointed me to this update (complete with a picture of some guy who keeps threatening to sue him but never gets around to it).

The story McAfee tells is sad but also hilarious. Especially the part about "smuck." It all looks like one more symptom of the imploding market for books. Prices for intro stat and econ books go up and up (even mediocre textbooks routinely cost $150), and the publishers put more and more effort into promotion.

McAfee adds:

I [McAfee] hope a publisher sues me about posting the articles I wrote. Even a takedown notice would be fun. I would be pretty happy to start posting about that, especially when some of them are charging $30 per article.

Ted Bergstrom and I used state Freedom of Information acts to extract the journal price deals at state university libraries. We have about 35 of them so far. Like textbooks, journals have gone totally out of control. Mostly I'm focused on journal prices rather than textbooks, although of course I contributed a free text. People report liking it and a few schools, including Harvard and NYU, used it, but it fizzled in the marketplace. I put it in to see if things like testbanks make a difference; their model is free online, cheap ($35) printed. The beauty of free online is it limits the sort of price increases your book experienced.

Here is a link to the FOIA work, which also has some discussion of the failed attempts to block us.

By the way, I had a spoof published in "Studies in Economic Analysis", a student-run journal that was purchased by Emerald Press. Emerald charges about $35 for reprints. I wrote them a take-down notice since SEA didn't bother with copyright forms so I still owned the copyright. They took it down but are not returning any money they collected on my article, pleading a lack of records. These guys are the schmucks of all schmucks.

Partly in response to my blog on the Harlem Children's Zone study, Mark Palko wrote this:

Talk of education reform always makes me [Palko] deeply nervous. Part of the anxiety comes from having spent a number of years behind the podium and having seen the disparity between the claims and the reality of previous reforms. The rest comes from being a statistician and knowing what things like convergence can do to data.

Convergent behavior violates the assumption of independent observations used in most simple analyses, but educational studies commonly, perhaps even routinely ignore the complex ways that social norming can cause the nesting of student performance data.

In other words, educational research is often based on the idea that teenagers do not respond to peer pressure. . . .

and this:


About this Archive

This page is an archive of recent entries in the Economics category.
