Results matching “jeff lax”

Brendan Nyhan points me to this from Don Taylor:

Can national data be used to estimate state-level results? . . . A challenge is the fact that the sample size in many states is very small . . . Richard [Gonzales] used a regression approach to extrapolate this information to provide state-level estimates of support for health reform:
To get around the challenge presented by small sample sizes, the model presented here combines the benefits of incorporating auxiliary demographic information about the states with the hierarchical modeling approach commonly used in small area estimation. The model is designed to "shrink" estimates toward the average level of support in the region when there are few observations available, while simultaneously adjusting for the demographics and political ideology in the state. This approach therefore takes fuller advantage of all information available in the data to estimate state-level public opinion.

This is a great idea, and it is already being used all over the place in political science. For example, here. Or here. Or here.
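To make the "shrinkage" idea in the quoted passage concrete, here's a minimal sketch in Python (my own toy numbers and function name, not Gonzales's actual model) of the precision-weighted compromise that a hierarchical model strikes between a state's raw mean and its regional mean:

```python
def shrunken_estimate(ybar_state, n_state, mu_region, sigma2, tau2):
    """Precision-weighted compromise between a state's raw mean and its
    regional mean; states with few observations get pulled toward the region."""
    w_data = n_state / sigma2  # precision of the state's raw mean
    w_region = 1.0 / tau2      # precision of the regional "prior"
    return (w_data * ybar_state + w_region * mu_region) / (w_data + w_region)

# A state with 12 respondents gets shrunk hard toward its regional mean;
# a state with 1200 respondents barely moves.
print(shrunken_estimate(0.70, 12, 0.50, sigma2=0.25, tau2=0.01))    # ~0.57
print(shrunken_estimate(0.70, 1200, 0.50, sigma2=0.25, tau2=0.01))  # ~0.70
```

The full model described in the quote also adjusts for each state's demographics and ideology--in effect replacing mu_region with a regression prediction--which this sketch omits.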

See here for an overview article, "How should we estimate public opinion in the states?" by Jeff Lax and Justin Phillips.

It's good to see practical ideas being developed independently in different fields. I know that methods developed by public health researchers have been useful in political science, and I hope that in turn they can take advantage of the progress we've made in multilevel regression and poststratification.

I was invited by the Columbia University residence halls to speak at an event on gay marriage. (I've assisted my colleagues Jeff Lax and Justin Phillips in their research on the topic.) The event sounded fun--unfortunately I'll be out of town that weekend so can't make it--but it got me thinking about how gay marriage and other social issues are so relaxing to think about because there's no need for doubt.

About half of Americans support same-sex marriage and about half oppose it. And the funny thing is, you can be absolutely certain in your conviction, from either direction. If you support it, it's a simple matter of human rights, and it's a bit ridiculous to suppose that if gay marriage is allowed, it will somehow wreck all the straight marriages out there. Conversely, you can oppose it on the clear rationale of wanting to keep marriage the same as it's always been, and suggest that same-sex couples can be free to get together outside of marriage, as they always could. (Hey, it was good enough for Abraham Lincoln and his law partner!)

In contrast, the difficulty of expressing opinions about the economy, or about foreign policy, is that you have to realize at some level that you might be wrong.

For example, even Paul Krugman must occasionally wonder whether maybe the U.S. can't really afford another trillion dollars of debt, and even William Beach (he of the 2.8% unemployment rate forecast, later updated to a still-implausible point forecast of 4.3%) must occasionally wonder whether massive budget cuts will really send the economy into nirvana.

Similarly, even John McCain must wonder on occasion whether it would've been better to withdraw from Iraq in 2003, or 2004, or 2005. And even a firm opponent of the war such as the Barack Obama of early 2008 must have occasionally thought that maybe the invasion wasn't such a bad idea on balance.

I don't really have anything more to say on this. I just think it's interesting how there can be so much more feeling of certainty about social policy.

Just chaid

Reading somebody else's statistics rant made me realize the inherent contradictions in much of my own statistical advice.

I dodged a bullet the other day, blogorifically speaking. This is a (moderately) long story but there's a payoff at the end for those of you who are interested in forecasting or understanding voting and public opinion at the state level.

Act 1

It started when Jeff Lax made this comment on his recent blog entry:

Nebraska Is All That Counts for a Party-Bucking Nelson

Dem Senator On Blowback From His Opposition To Kagan: 'Are They From Nebraska? Then I Don't Care'

Fine, but 62% of Nebraskans with an opinion favor confirmation... 91% of Democrats, 39% of Republicans, and 61% of Independents. So I guess he only cares about Republican Nebraskans...

I conferred with Jeff and then wrote the following entry for fivethirtyeight.com. There was a backlog of posts at 538 at the time, so I set it on delay to appear the following morning.

Here's my post (which I ended up deleting before it ever appeared):

John Kastellec, Jeff Lax, and Justin Phillips write:

Do senators respond to the preferences of their states' median voters or only to the preferences of their co-partisans? We [Kastellec et al.] study responsiveness using roll call votes on ten recent Supreme Court nominations. We develop a method for estimating state-level public opinion broken down by partisanship. We find that senators respond more powerfully to their partisan base when casting such roll call votes. Indeed, when their state median voter and party median voter disagree, senators strongly favor the latter. [emphasis added] This has significant implications for the study of legislative responsiveness, the role of public opinion in shaping the personnel of the nation's highest court, and the degree to which we should expect the Supreme Court to be counter-majoritarian. Our method can be applied elsewhere to estimate opinion by state and partisan group, or by many other typologies, so as to study other important questions of democratic responsiveness and performance.

Their article uses Mister P and features some beautiful graphs.

Jeff Lax sends along this good catch from Ben Somberg, who noticed this from Washington Post writer Lori Montgomery:

If Congress doesn't provide additional stimulus spending, economists inside and outside the administration warn that the nation risks a prolonged period of high unemployment or, more frightening, a descent back into recession. But a competing threat -- the exploding federal budget deficit -- seems to be resonating more powerfully in Congress and among voters.

Somberg is skeptical, though, at least of the part about "resonating among voters." He finds that in four out of five recent polls, people are much more concerned about jobs than about the deficit:

"Everything's coming up Tarslaw!"

I just finished three novels that got me thinking about the nature of fiction.

First, How I Became a Famous Novelist, by Steve Hely. Just from seeing the back cover of the book, with its hilarious parody of a New York Times bestseller list, I was pretty sure I'd like it. And, indeed, after reading the first six pages or so, I was laughing so hard that I put the book aside to keep myself from reading it too fast--I wanted to savor the pleasure. How I Became... really is a great book--in some ways, the Airplane! of books, in that it is overflowing with jokes. Hely threw in everything he had, clearly confident he could come up with more funny stuff whenever needed. (I don't know if you remember, but movie comedies used to be stingy with the humor. They'd space out their jokes with boring bits of drama and elaborate set-pieces. Airplane! was special because it had more jokes than it knew what to do with.) Anyway, Hely's gimmick in How I Became... is to invent dozens of hypothetical books, newspapers, locations, etc. There are bits of pseudo-bestsellers from all sorts of genres. The main character ends up writing a Snow-Falling-on-Cedars-type piece of overwritten historical crap. I have to admit I felt a little envy when he recounts the over-the-top, yet plausible sequence of events that puts him on the bestseller list--I still think this could've been possible with Red State, Blue State if we had had some professional public relations help--but I guess that added to the bittersweet pleasure of reading the book.

The other thing I appreciated about How I Became... was its forthrightness about the effort required to write a whole book and put it all together. I know what he's talking about. It really is a pain in the ass to get a book into good shape. More so on my end: Nick Tarslaw had an editor, a luxury I don't have for my books. (I mean, sure, I have an editor and a copy editor, but the role of the former is mostly to acquire my book and maybe make a few comments and suggestions; we're not talking Maxwell Perkins here. And copy editors will catch some mistakes (and introduce about an equal number of their own), but, again, I'm the one (along with my coauthors) who's doing all the work.)

Finally, I should say that, the minute I started reading How I Became..., I happily recognized it as part of what might be called "The Office" genre of comic novels, along with, for example, Slab Rat, Then We Came to the End, and Personal Days. To me, Then We Came to the End was deeper, and left me with a better aftertaste, than How I Became..., but How I Became... had more flat-out funny moments, especially in its first half. (Set-ups are almost always better than resolutions.)

The next book I read recently was The Finder, by Colin Harrison, a very well-written and (I assume) well-researched piece of crap about a mix of lowlifes, killers, and big shots. The plot kept it moving, and I enjoyed the NYC local color. But, jeez, is it really necessary that the hero be, not only a good guy in every respect, but also happen to have rugged good looks, much-talked-about upper-body strength, and of course be gentle yet authoritative in the sack? Oh, and did I forget to mention, he's also the strong silent type? Does the main female character really have to be labeled by everybody as "pretty" or, occasionally "gorgeous"? Is it really required that the rich guy be a billionaire? Wouldn't a few million suffice? Etc.

Still, even though it insulted my intelligence and moral sensibilities a bit, The Finder was fun to read. One advantage of having no email for a week is that it freed up time to relax and read a couple of books.

Anyway, before reading How I Became..., I would've just taken the above as Harrison's choices in writing his book, but now I'm wondering . . . Did Harrison do it on purpose? Did he think to himself, Hey, I wanna write a big bestseller this time, let me take what worked before and amp it up? I guess what I'm saying is, Hely's book has ruined the enjoyment I can get from trash fiction. At least for a while.

Most recently, I was in the library and checked out The Dwarves of Death, an early novel (from 1990) by Jonathan Coe, author a few years ago of the instant classic The Rotter's Club. The Dwarves of Death isn't perfect--for one thing, it has plot holes you could thread the Spruce Goose through, and without needing any careful piloting--but it's just great. It's real in a way that How I Became... is not. This is not a slam on Hely's book, which is an excellent confection; it's more of a distinction between a dessert and a main course.

The Dwarves of Death had so many funny lines I forgot all of them. That said, it wasn't laugh-out-loud funny the way How I Became... was (especially in its first half). Then again, it didn't need to be.

The National Election Study is hugely important in political science, but, as with just about all surveys, it has problems of coverage and nonresponse. Hence, some adjustment is needed to generalize from sample to population.

Matthew DeBell and Jon Krosnick wrote this report summarizing some of the choices that have to be made when considering adjustments for future editions of the survey. The report was put together in consultation with several statisticians and political scientists: Doug Rivers, Martin Frankel, Colm O'Muircheartaigh, Charles Franklin, and me. Survey weighting isn't easy, and this sort of report is just about impossible to write--you can't help leaving things out. They did a good job, though, and it's great to have this stuff put down in an official way, so that people can work off of it going forward.

It's a lot harder to write a procedure for general use than to do a single analysis oneself.

Some corrections

I have a few corrections to add to the report that unfortunately didn't make it into the final version (no doubt because of space limitations):

To continue with our discussion (earlier entries 1, 2, and 3):

1. Pearl has mathematically proved the equivalence of Pearl's and Rubin's frameworks. At the same time, Pearl and Rubin recommend completely different approaches. For example, Rubin conditions on all information, whereas Pearl does not do so. In practice, the two approaches are much different. Accepting Pearl's mathematics (which I have no reason to doubt), this implies to me that Pearl's axioms do not quite apply to many of the settings that I'm interested in.

I think we've reached a stable point in this part of the discussion: we can all agree that Pearl's theorem is correct, and we can disagree as to whether its axioms and conditions apply to statistical modeling in the social and environmental sciences. I'd claim some authority on this latter point, given my extensive experience in this area--and of course, Rubin, Rosenbaum, etc., have further experience--but of course I have no problem with Pearl's methods being used on political science problems, and we can evaluate such applications one at a time.

2. Pearl and I have many interests in common, and we've each written two books that are relevant to this discussion. Unfortunately, I have not studied Pearl's books in detail, and I doubt he's had the time to read my books in detail either. It takes a lot of work to understand someone else's framework, work that we don't necessarily want to do if we're already spending a lot of time and effort developing our own research programmes. It will probably be the job of future researchers to make the synthesis. (Yes, yes, I know that Pearl feels that he already has the synthesis, and that he's proved this to be the case, but Pearl's synthesis doesn't yet take me all the way to where I want to go, which is to do my applied work in social and environmental sciences.) I truly am open to the possibility that everything I do can be usefully folded into Pearl's framework someday.

That said, I think Pearl is on shaky ground when he tries to say that Don Rubin or Paul Rosenbaum is making a major mistake in causal inference. If Pearl's mathematics implies that Rubin and Rosenbaum are making a mistake, then my first step would be to apply the syllogism the other way and see whether Pearl's assumptions are appropriate for the problem at hand.

3. I've discussed a poststratification example. As I discussed yesterday (see the first item here), a standard idea, both in survey sampling and causal inference, is to perform estimates conditional on background variables, and then average over the population distribution of the background variables to estimate the population average. Mathematically, p(theta) = sum_x p(theta|x)p(x). Or, if x is discrete and takes on only two values, p(theta) = (N_1 p(theta|x=1) + N_2 p(theta|x=2)) / (N_1 + N_2).

This has nothing at all to do with causal inference: it's straight Bayes.
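As a toy numerical version of that formula (numbers invented for illustration), the population estimate is just the count-weighted average of the within-cell estimates:

```python
# Poststratification: count-weighted average of within-cell estimates.
cell_estimate = {"x=1": 0.62, "x=2": 0.41}         # estimated support within each cell
cell_count = {"x=1": 3_000_000, "x=2": 1_000_000}  # census totals N_1, N_2

total = sum(cell_count.values())
population_estimate = sum(cell_estimate[c] * cell_count[c] for c in cell_count) / total
print(population_estimate)  # (3M * 0.62 + 1M * 0.41) / 4M = 0.5675
```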

Pearl thinks that if the separate components p(theta|x) are nonidentifiable, you can't do this, and you should not include x in the analysis. He writes:

I [Pearl] would really like to see how a Bayesian method estimates the treatment effect in two subgroups where it is not identifiable, and then, by averaging the two results (with two huge posterior uncertainties) gets the correct average treatment effect, which is identifiable, hence has a narrow posterior uncertainty. . . . I have no doubt that it can be done by fine-tuned tweaking . . . But I am talking about doing it the honest way, as you described it: "the uncertainties in the two separate groups should cancel out when they're being combined to get the average treatment effect." If I recall my happy days as a Bayesian, the only operation allowed in combining uncertainties from two subgroups is taking a linear combination of the two, weighted by the (given) relative frequencies of the groups. But, I am willing to learn new methods.

I'm glad that Pearl is willing to learn new methods--so am I--but no new methods are needed here! This is straightforward, simple Bayes. Rod Little has written a lot about these ideas. I wrote some papers on it in 1997 and 2004. Jeff Lax and Justin Phillips do it in their multilevel modeling and poststratification papers where, for the first time, they get good state-by-state estimates of public opinion on gay rights issues. No "fine-tuned tweaking" required. You just set up the model and it all works out. If the likelihood provides little to no information on theta|x but it does provide good information on the marginal distribution of theta, then this will work out fine.
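Here's a minimal simulation of the point (a toy Gaussian setup of my own, not anyone's published example). The likelihood informs only the weighted average of the two subgroup effects, so each subgroup's posterior stays very wide, yet the posterior for the population average is narrow, with no special tweaking:

```python
import numpy as np

rng = np.random.default_rng(1)

w = np.array([0.6, 0.4])  # known population shares of the two subgroups

# Priors: theta_j ~ N(0, tau^2), vague. Data: y ~ N(w @ theta, sigma^2),
# so the likelihood identifies only the weighted average of theta_1, theta_2.
tau, sigma, y = 10.0, 0.1, 0.53

# Conjugate Gaussian posterior for theta = (theta_1, theta_2).
post_cov = np.linalg.inv(np.eye(2) / tau**2 + np.outer(w, w) / sigma**2)
post_mean = post_cov @ (w * y / sigma**2)

draws = rng.multivariate_normal(post_mean, post_cov, size=50_000)
print(draws[:, 0].std())  # ~5.5: theta_1 alone is barely identified
print((draws @ w).std())  # ~0.1: the average is pinned down by the data
```

The two wide subgroup uncertainties are almost perfectly negatively correlated in the posterior, which is exactly how they cancel in the weighted average.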

In practice, of course, nobody is going to control for x if we have no information on it. Bayesian poststratification really becomes useful in that it can put together different sources of partial information, such as data with small sample sizes in some cells, along with census data on population cell totals.

Please, please don't say "the correct thing to do is to ignore the subgroup identity." If you want to ignore some information, that's fine--in the context of the models you are using, it might even make sense. But Jeff and Justin and the rest of us use this additional information all the time, and we get a lot out of it. What we're doing is not incorrect at all. It's Bayesian inference. We set up a joint probability model and then work from it. If you want to criticize the probability model, that's fine. If you want to criticize the entire Bayesian edifice, then you'll have to go up against mountains of applied successes.

As I wrote earlier, you don't have to be a Bayesian (or, I could say, you don't have to be a Bayesian)--I have a great respect for the work of Hastie, Tibshirani, Robins, Rosenbaum, and many others who are developing methods outside the Bayesian framework--but I think you're on thin ice if you want to try to claim that Bayesian analysis is "incorrect."

4. Jennifer and I and many others make the routine recommendation to exclude post-treatment variables from analysis. But, as both Pearl and Rubin have noted in different contexts, it can be a very good idea to include such variables--it's just not a good idea to include them as regression predictors. If the only thing you're allowed to do is regression (as in chapter 9 of ARM), then I think it's a good idea to exclude post-treatment predictors. If you're allowed more general models, then one can and should include them. I'm happy to have been corrected by both Pearl and Rubin on this one.
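To illustrate the regression half of that point, here's a quick simulation (my own toy example): with a randomized treatment, regressing the outcome on treatment alone recovers the total effect, whereas also including a post-treatment mediator as a predictor silently changes the estimand to the direct effect only:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

z = rng.integers(0, 2, n)                   # randomized treatment
m = 0.8 * z + rng.normal(size=n)            # post-treatment mediator
y = 0.5 * z + 1.0 * m + rng.normal(size=n)  # total effect of z is 0.5 + 0.8*1.0 = 1.3

X1 = np.column_stack([np.ones(n), z])       # y ~ z
X2 = np.column_stack([np.ones(n), z, m])    # y ~ z + m (conditions on post-treatment m)

print(np.linalg.lstsq(X1, y, rcond=None)[0][1])  # ~1.3, the total effect
print(np.linalg.lstsq(X2, y, rcond=None)[0][1])  # ~0.5, the direct effect only
```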

5. As I noted yesterday (see second-to-last item here), all statistical methods have holes. This is what motivates us to consider new conceptual frameworks as well as incremental improvements in the systems with which we are most familiar.

Summary . . . so far

I doubt this discussion is over yet, but I hope the above notes will settle some points. In particular:

- I accept (on authority of Pearl, Wasserman, etc.) that Pearl has proved the mathematical equivalence of his framework and Rubin's. This, along with Pearl's other claim that Rubin and Rosenbaum have made major blunders in applied causal inference (a claim that I doubt), leads me to believe that Pearl's axioms are in some way not appropriate to the sorts of problems that Rubin, Rosenbaum, and I work on: social and environmental problems that don't have clean mechanistic causation stories. Pearl believes his axioms do apply to these problems, but then again he doesn't have the extensive experience that Rosenbaum and Rubin have. So I think it's very reasonable to suppose that his axioms aren't quite appropriate here.

- Poststratification works just fine. It's straightforward Bayesian inference, nothing to do with causality at all.

- I have been sloppy when telling people not to include post-treatment variables. Both Rubin and Pearl, in their different ways, have been more precise about this.

- Much of this discussion is motivated by the fact that, in practice, none of these methods currently solves all our applied problems in the way that we would like. I'm still struggling with various problems in descriptive/predictive modeling, and causation is even harder!

- Along with this, taste--that is, working with methods we're familiar with--matters. Any of these methods is only as good as the models we put into them, and we typically are better modelers when we use languages with which we're more familiar. (But not always. Sometimes it helps to liberate oneself, try something new, and break out of the implicit constraints we've been working under.)

To follow up on yesterday's discussion, I wanted to go through a bunch of different issues involving graphical modeling and causal inference.

Contents:
- A practical issue: poststratification
- 3 kinds of graphs
- Minimal Pearl and Minimal Rubin
- Getting the most out of Minimal Pearl and Minimal Rubin
- Conceptual differences between Pearl's and Rubin's models
- Controlling for intermediate outcomes
- Statistical models are based on assumptions
- In defense of taste
- Argument from authority?
- How could these issues be resolved?
- Holes everywhere
- What I can contribute

A political scientist writes:

Here's a question that occurred to me that others may also have. I imagine "Mister P" will become a popular technique to circumvent sample size limitations and create state-level data for various public opinion variables. Just wondering: are there any reasons why one wouldn't want to use such estimates as a state-level outcome variable? In particular, does the dependence between observations caused by borrowing strength in the multilevel model violate the independence assumptions of standard statistical models? Lax and Phillips use "Mister P" state-level estimates as a predictor, but I'm not sure if someone has used them as an outcome or whether it would be appropriate to do so.

First off, I love that the email to me was headed, "mister p question." And I know Jeff will appreciate that too. We had many discussions about what to call the method.

To get back to the question at hand: yes, I think it should be ok to use estimates from Mister P as predictor or outcome variables in a subsequent analysis. In either case, it could be viewed as an approximation to a full model that incorporates your regression of interest, along with the Mr. P adjustments.

I imagine, though, that there are settings where you could get the wrong answer by using the Mr. P estimates as predictors or as outcomes. One way I could imagine things going wrong is through varying sample sizes. Estimates will get pooled more in the states with fewer respondents, and I could see this causing a problem. For a simple example, imagine a setting with a weak signal, lots of noise, and no state-level predictors. Then you'd "discover" that small states are all near the average, and large states are more variable.
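Here's a quick simulation of that failure mode (a toy setup of my own). All states draw their true effects from the same distribution, but after partial pooling the small-sample states cluster near the grand mean, so a downstream analysis would "discover" that small states are less variable:

```python
import numpy as np

rng = np.random.default_rng(3)

n_states = 50
tau, sigma = 1.0, 5.0                 # weak signal, lots of noise
theta = rng.normal(0, tau, n_states)  # true state effects: same distribution everywhere
n = np.where(np.arange(n_states) < 25, 20, 2000)  # 25 small states, 25 large

ybar = theta + rng.normal(0, sigma / np.sqrt(n))  # raw state means

# Partial pooling toward the grand mean (variances treated as known).
shrink = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
est = shrink * ybar + (1 - shrink) * ybar.mean()

print(est[:25].std())  # ~0.7: small states artificially huddle near the mean
print(est[25:].std())  # ~1.0: large states keep their spread
```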

Another way a problem could arise, perhaps, is if you have a state-level predictor that is not statistically significant but still induces a correlation. With the partial pooling, you'll see a stronger relation with the predictor in the Mr. P estimates than in the raw data, and if you pipe this through to a regression analysis, I could imagine you could see statistical significance when it's not really there.

I think there's an article to be written on this.

Jeff Lax and Justin Phillips posted this summary of attitudes on a bunch of gay rights questions:

[Figure: gay.png--state-by-state estimates of public support for several gay rights policies]

They did it all using multilevel regression and poststratification. And a ton of effort.

P.S. My only criticisms of the above graph are:

(a) I'd just put labels at 20%, 30%, 40%, etc. I think the labels at 25, 35, etc., are overkill and make the numbers harder to read. And the tick marks should be smaller.

(b) The use of color and the legend on the upper left are well done. But they should place the items in the legend in the same order as the averages in the graphs. Thus, it should be same-sex marriage, then 2nd parent adoption, then civil unions, then health benefits, and so forth.

Gay marriage: a tipping point?

Fancy statistical analysis can indeed lead to better understanding.

Jeff Lax and Justin Phillips used the method of multilevel regression and poststratification ("Mister P"; see here and here) to estimate attitudes toward gay rights in the states. They put together a dataset using national opinion polls from 1994 through 2009 and analyzed several different opinion questions on gay rights.

Policy on gay rights in the U.S. is mostly set at the state level, and Lax and Phillips's main substantive finding is that state policies are strongly responsive to public opinion. However, in some areas, policies are lagging behind opinion somewhat.

A fascinating trend

Here I'll focus on the coolest thing Lax and Phillips found, which is a graph of state-by-state trends in public support for gay marriage. In the past fifteen years, gay marriage has increased in popularity in all fifty states. No news there, but what was a surprise to me is where the largest changes have occurred. The popularity of gay marriage has increased fastest in the states where gay rights were already relatively popular in the 1990s.

In 1995, support for gay marriage exceeded 30% in only six states: New York, Rhode Island, Connecticut, Massachusetts, California, and Vermont. In these states, support for gay marriage has increased by an average of almost 20 percentage points. In contrast, support has increased by less than 10 percentage points in the six states that in 1995 were most anti-gay-marriage--Utah, Oklahoma, Alabama, Mississippi, Arkansas, and Idaho.

Here's the picture showing all 50 states:

[Figure: lax6.png--state-by-state trends in support for gay marriage, all 50 states]

I was stunned when I saw this picture. I generally expect to see uniform swing, or maybe even some "regression to the mean," with the lowest values increasing the most and the highest values declining, relative to the average. But that's not what's happening at all. What's going on?

Some possible explanations:

- A "tipping point": As gay rights become more accepted in a state, more gay people come out of the closet. And once straight people realize how many of their friends and relatives are gay, they're more likely to be supportive of gay rights. Recall that the average American knows something like 700 people. So if 5% of your friends and acquaintances are gay, that's 35 people you know--if they come out and let you know they're gay. Even accounting for variation in social networks--some people know 100 gay people, others may only know 10--there's the real potential for increased awareness leading to increased acceptance.

Conversely, in states where gay rights are highly unpopular, gay people will be slower to reveal themselves, and thus the knowing-and-accepting process will go slower.

- The role of politics: As gay rights become more popular in "blue states" such as New York, Massachusetts, California, etc., it becomes more in the interest of liberal politicians to push the issue (consider Governor David Paterson's recent efforts in New York). Conversely, in states where gay marriage is highly unpopular, it's in the interest of social conservatives to bring the issue to the forefront of public discussion. So the general public is likely to get the liberal spin on gay rights in liberal states and the conservative spin in conservative states. Perhaps this could help explain the divergence.

Where do we go next in studying this?

- We can look at other issues, not just on gay rights, to see where this sort of divergence occurs, and where we see the more expected uniform swing or regression-to-the-mean patterns.

- For the gay rights questions, we can break up the analysis by demographic factors--in particular, religion and age--to see where opinions are changing the fastest.

- To study the "tipping point" model, we could look at survey data on "Do you know any gay people?" and "How many gay people do you know?" over time and by state.

- To study the role of politics, we could gather data on the involvement of state politicians and political groups on gay issues.

I'm sure there are lots of other good ideas we haven't thought of.

P.S. More here.

Stephen Senn quips: "A theoretical statistician knows all about measure theory but has never seen a measurement whereas the actual use of measure theory by the applied statistician is a set of measure zero."

Which reminds me of Lucien Le Cam's reply when I asked him once whether he could think of any examples where the distinction between the strong law of large numbers (convergence with probability 1) and the weak law (convergence in probability) made any difference. Le Cam replied, No, he did not know of any examples. Le Cam was the theoretical statistician's theoretical statistician, so there's your answer.

The other comment of Le Cam's that I remember came when I showed him my draft of Bayesian Data Analysis. I told him I thought that chapter 5 (on hierarchical models) might especially interest him. A few days later I asked him if he'd taken a look, and he said, yes, this stuff wasn't new; he'd done hierarchical models when he'd been an applied Bayesian back in the 1940s.

A related incident occurred when I gave a talk at Berkeley in the early 90s in which I described our hierarchical modeling of votes. One of my senior colleagues--a very nice guy--remarked that what I was doing was not particularly new; he and his colleagues had done similar things for one of the TV networks at the time of the 1960 election.

At the time, these comments irritated me. But, from the perspective of time, I now think that they were probably right. Our work in chapter 5 of Bayesian Data Analysis is--to put it in its best light--a formalization or normalization of methods that people had done in various particular examples and mathematical frameworks. (Here I'm using "normalization" not in the mathematical sense of multiplying a function by a constant so that it sums to 1, but in the sociological sense of making something more normal.) Or, to put it another way, we "chunked" hierarchical models, so that future researchers (including ourselves) could apply them at will, allowing us to focus on the applied aspects of our problems rather than on the mathematics.

To put it another way: why did Le Cam's hierarchical Bayesian work in the 1940s and my other colleague's work in 1960s not lead to more widespread use of these methods? Because these methods were not yet normalized--there was not a clear separation between the math, the philosophy, and the applications.

To focus on a more specific example, consider the method of multilevel regression and poststratification ("Mister P"), which Tom Little and I wrote about in 1997, which David Park, Joe Bafumi, and I picked back up in 2004, and which finally took off with the series of articles by Jeff Lax and Justin Phillips (see here and here). This is a lag of over 10 years, but really it's more than that: when Tom and I sent our article to the journal Survey Methodology back in 1996, the reviews said basically that our article was a good exposition of a well-known method. Well-known, but it took many, many steps before it became normalized.

The Next Supreme Court Justice

My quick take on the Souter replacement is that, with 59 Democratic senators and high popularity, Obama could nominate Pee Wee Herman to the Supreme Court and get him confirmed. But I'm no expert on this. The experts are my colleagues down the hall, John Kastellec, Jeff Lax, and Justin Phillips, who wrote this article on public opinion and senate confirmation of Supreme Court nominees. They find:

Greater public support strongly increases the probability that a senator will vote to approve a nominee, even after controlling for standard predictors of roll call voting. We also find that the impact of opinion varies with context: it has a greater effect on opposition party senators, on ideologically opposed senators, and for generally weak nominees.

More discussion, and some pretty graphs, below.

What do Americans think of gay rights?

Justin Phillips, Jeff Lax, and I wrote this article summarizing some of the findings of their recent research on gay rights in the states:

In his address at the Democratic convention, Barack Obama said, "surely we can agree that our gay and lesbian brothers and sisters deserve to visit the person they love in the hospital and to live lives free of discrimination."

What was he thinking, saying this to the nation? California was on the way to a contentious battle over same-sex marriage and the issue has arisen in other states as well. Isn't gay rights a wedge issue that Democrats should try to avoid?

Yes, Americans are conflicted about same-sex marriage, but one thing they mostly agree on is support for antidiscrimination laws.

In surveys, 72% of Americans support laws prohibiting employment discrimination on the basis of sexual orientation. An even greater number answer yes when asked, "Do you think homosexuals should have equal rights in terms of job opportunities?" This consensus is remarkably widespread: in all states a majority support antidiscrimination laws protecting gays and lesbians, and in all but 10 states this support is 70% or higher.

But people do not uniformly support gay rights. When asked whether gays should be allowed to work as elementary school teachers, 48% of Americans say no. We could easily understand a consistent pro-gay or anti-gay position. But what explains this seeming contradiction within public opinion, in which gays should be legally protected against discrimination but at the same time not be allowed to be teachers?

If anything, we could imagine people holding an opposite constellation of views, saying that gays should not be forbidden to be public school teachers but still allowing private citizens to discriminate against gays. A libertarian, for example, might take that position, but it does not appear to be popular among Americans.

We understand the contradictory attitude on gay rights in terms of framing.

This is cool stuff (by Jeff Lax and Justin Phillips).

John Kastellec sent me this attractive paper:

We [Kastellec et al.] study the relationship between state-level public opinion and the roll call votes of senators on Supreme Court nominees. Applying recent advances in multilevel modeling, we use national polls on nine recent Supreme Court nominees to produce state-of-the-art estimates of public support for the confirmation of each nominee in all 50 states. We show that greater public support strongly increases the probability that a senator will vote to approve a nominee, even after controlling for standard predictors of roll call voting. We also find that the impact of opinion varies with context: it has a greater effect on opposition party senators, on ideologically opposed senators, and for generally weak nominees. These results establish a systematic and powerful link between constituency opinion and voting on Supreme Court nominees.

Another triumph of the Lax/Phillips approach of linking policy to state-level opinion (see also here). Also another example of the synergy that's supposed to happen within an academic department, with Jeff, Justin, John, and myself each bringing unique contributions. I don't think any of this would've happened if we hadn't all been brought together, with repeated interactions, on the 7th floor.

"So the polls must be wrong"

Jeff Lax sends along this article:

Are the polls obscuring the reality that Barack Obama is beating Hillary Clinton in the race for the Democratic nomination for president? Drew Cline, the editorial page editor of New Hampshire's Union Leader, thinks so.

Based on money-raising and visible support on the streets of New Hampshire, "the evidence shows that Obama has broader support than is being picked up by the polls," Cline writes at his Union Leader blog. "So the polls must be wrong."
. . .
"Think of it like a House, M.D. episode. When you have a test result you know is accurate (in this case, the fund-raising numbers) that contrasts with a symptom or test result you can't explain (the poll numbers), you go with what you know is right and keep testing the other one until they match." . . .

Much as I hate to contradict anyone named "Drew," I have to admit that a natural explanation for the discrepancy is that the visible support he's seen on the "streets of New Hampshire" does not represent a random sample of primary voters. Of course, as I never tire of saying, a poll is a snapshot, not a forecast, and things can definitely change.

Two different people (Christopher Mann and Jeff Lax) pointed me to this graph in the Wall Street Journal that features a goofy regression line. My expertise on taxes and economic growth is zero, and the statistical problems with the regression line are apparent, so I don't really have anything to say here. Hey, if all roads go through Rome, it's only fair that all lines go through Norway.

But, to get serious for a minute . . . Setting aside the concerns with the regression line or with measurement issues in defining the variables being graphed, it's an interesting reminder of the duality between descriptive vs. causal inference and aggregate vs. individual-level analysis (or, as would be said in psychology, between-subject vs. within-subject analysis). I'm not criticizing the use of graphs such as these (or corresponding regression models) that use between-country comparisons to make implicit causal inferences about policies--it's just helpful to remember the assumptions needed to draw these conclusions.
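As a small simulated illustration of that duality (toy numbers, nothing to do with the WSJ graph's actual data): the between-country and within-country slopes can even have opposite signs, which is why the aggregate scatterplot by itself can't settle the individual-level or causal question:

```python
import numpy as np

rng = np.random.default_rng(4)

n_countries, n_per = 20, 200
mu_x = rng.normal(0, 2, n_countries)  # country-level averages of x

x, y = [], []
for j in range(n_countries):
    xj = mu_x[j] + rng.normal(0, 1, n_per)
    # Within each country y rises with x; across countries it falls with mu_x.
    yj = 1.0 * (xj - mu_x[j]) - 2.0 * mu_x[j] + rng.normal(0, 1, n_per)
    x.append(xj)
    y.append(yj)
x, y = np.concatenate(x), np.concatenate(y)

# Between-country (aggregate) slope: country means of y on country means of x.
xbar = x.reshape(n_countries, n_per).mean(axis=1)
ybar = y.reshape(n_countries, n_per).mean(axis=1)
print(np.polyfit(xbar, ybar, 1)[0])  # ~ -2: the aggregate relationship is negative

# Within-country (individual-level) slope: deviations from country means.
print(np.polyfit(x - np.repeat(xbar, n_per), y - np.repeat(ybar, n_per), 1)[0])  # ~ +1
```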
