February 2005 Archives

Is voting contagious?


David Nickerson sent me the following message:

I saw your post from February 3rd complaining about the lack of connection between social networks and voter turnout. I'm just finishing up my dissertation in political science at Yale (under Don Green) before starting at Notre Dame next year. The dissertation is on behavioral contagion, and a couple of chapters look at voter turnout. Attached is one chapter describing a randomized field experiment I conducted to determine the degree to which voting is contagious within a household. You might find it of interest (though the network involved is fairly small).

I'm also working with Dean Karlan (from Princeton economics) to broaden the scope of contagion experiments to see whether voting is contagious across households (and if so, how far). We're at the beginning stages of the research, but think it might be fruitful.

At the very least, I'm approaching the topic from a very different direction from Meredith Rolfe (whose work looks interesting). I thought you'd be interested to see that at least one other graduate student is working on linking social networks to voting behavior.

International data


Contingency and alternative history


This might not seem like it has much connection to statistics, but bear with me . . .

Alternative history--imaginings of different versions of this world that could have occurred if various key events in the past had been different--is a popular category of science fiction. Alternative history stories come in a number of flavors but a common feature of the best of the novels in this subgenre is that the alternate world is not "real."

Let's consider the top three alternative history novels (top three not in sales but in critical reputation, or at least my judgment of literary quality): The Man in the High Castle, Pavane, and Bring the Jubilee. (warning: spoilers coming)

Causal inference and decision trees


Causal inference and decision analysis are two areas of statistics in which I've seen very little overlap: the work in causal inference is typically very "foundational," with continuing reassessment based on first principles, whereas decision analysis is more meat-and-potatoes Bayesian inference--slap down a probability model, stick in a utility function, and turn the crank. (With all this processing, this must be ground beef and mashed potatoes.)

Actually, though, causal inference and decision analysis are connected at a fundamental level. Both involve manipulation and potential outcomes. In causal inference, the "causal effect" (or, as Michael Sobel would say, the "effect") is the difference between what would happen under treatment A and what would happen under treatment B. The key to this definition is that either treatment could be applied to the experimental unit by some agent (the "experimenter").

In parallel, decision analysis concerns what would happen if decision A or decision B were chosen. When drawing decision trees, we let squares and circles represent decision and uncertainty nodes, respectively. To map on to causal inference, the squares would represent potential treatments and the circles would represent uncertainty in outcomes--or population variability.
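To spell out the correspondence in notation (a sketch, using standard potential-outcome symbols rather than anything specific to a particular paper): for unit i, write Y_i(A) and Y_i(B) for the outcomes that would occur under treatments A and B. The causal effect for unit i is Y_i(A) - Y_i(B), and a typical estimand is the average, E[Y(A) - Y(B)]. In the decision tree, the square node is the choice between A and B, and the circle node that follows carries the outcome distribution p(y|A) or p(y|B); the decision-analytic comparison of E[U(y)|A] vs. E[U(y)|B] is then the utility-weighted counterpart of the average causal effect.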

In practice, the two areas of research are not always so closely connected. For example, in our decision analysis for home radon, the key decision is whether to remediate your house for radon. The causal effect of this decision on reducing the probability of lung cancer death is assumed to follow a specified functional form as estimated from previous studies. For our decision analysis, we don't worry too much about the details of where that estimate came from.

But in thinking about causal effects, the decision-making framework might be helpful in distinguishing among different possible potential-outcome frameworks.

Jasjeet Sekhon reports:

I recently released a new version of my Matching package for R. The new version has a function, called GenMatch(), which finds optimal balance using multivariate matching where a genetic search algorithm determines the weight each covariate is given. The function never consults the outcome and is able to find amazingly good balance in datasets where human researchers have failed to do so. I'm writing a paper on this algorithm right now.

The software, along with some examples, is here.
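If you want a feel for the workflow, here is a minimal sketch of the kind of calls involved (the variable names treat, X, and y are mine--a binary treatment indicator, a covariate matrix, and an outcome--not taken from Sekhon's examples):

  library(Matching)

  # Genetic search for the covariate weights that give the best balance;
  # note that the outcome y is never consulted at this stage.
  genout <- GenMatch(Tr = treat, X = X, BalanceMatrix = X, pop.size = 200)

  # Match on X using the weights found above, now bringing in the outcome.
  mout <- Match(Y = y, Tr = treat, X = X, Weight.matrix = genout)

  # Check covariate balance before and after matching.
  MatchBalance(treat ~ X, match.out = mout, nboots = 500)

The point of the design is visible in the code: the balance-optimization step and the outcome analysis are separated, so the search for good balance cannot be steered, even unconsciously, toward a favorable estimate.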

We also had a discussion of matching a few months ago on the blog.

Bayes for medical diagnosis


Here's a cool little paper by Christopher Gill, Lora Sabin, and Chris Schmid, on the use of Bayesian methods for medical diagnosis. (The paper will appear in the British Medical Journal.) The paper has a very readable explanation of Bayesian reasoning in a clinical context.

I don't really agree with their claim that "clinicians are natural Bayesians" (see here for my comments on that concept) but I agree that Bayesian inference seems like the right way to go, at least in the examples discussed in this paper.
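For readers who haven't seen this style of reasoning, the core of it is Bayes' rule on the odds scale: posterior odds = prior odds x likelihood ratio. A made-up illustration (my numbers, not the paper's): if a condition has prevalence 2% (prior odds 1:49) and a test has sensitivity 0.90 and specificity 0.95 (likelihood ratio for a positive result of 0.90/0.05 = 18), then a positive test gives posterior odds of 18:49, i.e., a probability of about 27%--much lower than the intuitive answer most people give, which is the familiar base-rate point.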

I had a little dialogue with Meredith Rolfe after reading her papers on political participation and social networks:

Chris Schmid (statistics, New England Medical Center) writes:

We're trying to make a prediction equation for GFR, which is the rate at which the kidney filters stuff out. It depends on a bunch of factors like age, sex, race, and lab values like the serum creatinine level. We have a bunch of databases in which these things are measured and know that the equation depends on factors such as presence of diabetes, renal transplantation, and the like. Physiologically, the level of creatinine depends on the GFR, but we can measure creatinine more easily than GFR, so we want the inverse prediction. Two complicating factors are measurement error in creatinine and GFR, as well as the possibility that the doctor may have some insight into the patient's condition that may not be available in the database. We have been proceeding along the lines of linear regression, but I suggested that a Bayesian approach might be able to handle the measurement error and the prior information. I'm attaching some notes I wrote up on the problem.

So, we have a development dataset to determine a model, a validation set to test it on, and then new patients on whom the GFR would need to be predicted, as well as some missing data on potentially important variables. What I am not clear about is how to use a prior for the prediction model, if this uses information not available in the dataset. So we'd develop a Bayesian scheme for estimating the posteriors of the regression coefficients and true unknown lab values but would then need to apply it to single individuals with a measure of creatinine and some covariates. The prior on the regression parameters would come from the posterior of the data analysis, but wouldn't the doctor's intuitive sense of the GFR level need to be incorporated also, and since it's not in the development dataset, how would that be done? It seems to me that you'd need a different model for the prediction than for the data analysis. Or is it that you want to use the data analysis to develop good priors to use in a new model?

A Bayesian approach would definitely be the natural way to handle the measurement error. I would think that substantive prior information (such as doctor's predictions) could be handled in some way as regression predictors, rather than directly as prior distributions. Then the data would be able to assess, and automatically calibrate, the relevance of these predictors for the observed data (the "training set" for the predictive model).
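To make that suggestion a bit more concrete, here is one hypothetical way the pieces could fit together (my notation, just a sketch, and assuming the doctor's guess is recorded in at least some training data). Let G_i be patient i's true (log) GFR and C_i the true (log) creatinine, with

  observed creatinine:  c_i ~ N(C_i, sigma_c^2)
  observed GFR (development data only):  g_i ~ N(G_i, sigma_g^2)
  regression:  G_i ~ N(a + b*C_i + X_i*beta + gamma*D_i, sigma^2),

plus some population distribution for the latent C_i. Here X_i collects age, sex, race, diabetes, transplant status, and so forth, and D_i is the doctor's guess, entered as just another predictor so that the coefficient gamma calibrates how much it is worth. For a new patient, the posterior for (a, b, beta, gamma) from the development data acts as the prior, and the predictive distribution for G_new follows from plugging in c_new, X_new, and D_new; the same model then handles the measurement error and the external information without needing a separate prediction model.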

Any other thoughts?

Power calculations


I've been thinking about power calculations recently because some colleagues and I are designing a survey to learn about social and political polarization (to what extent are people with different attitudes clustered in the social network?). We'd like the survey to be large enough, and with precise enough questions, so that we can have a reasonable expectation of actually learning something useful. Hence, power calculations.
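To give a sense of the arithmetic (standard formulas, nothing specific to our survey): for 80% power with a two-sided test at the 5% level, the true effect has to be about 1.96 + 0.84 = 2.8 standard errors away from zero. For a difference between two independent proportions near 50%, with n respondents per group, the s.e. of the difference is roughly sqrt(0.5/n), so detecting a 5-percentage-point difference requires 0.05 > 2.8*sqrt(0.5/n), or n of roughly 1600 per group. This is why the precision of the questions matters as much as the raw sample size.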

Carrie McLaren of Stay Free magazine had a self-described "rant" about Blink, the new book by science writer Malcolm Gladwell. I'll give Carrie's comments below, but my interest here isn't so much in Gladwell's book (which seems really cool) or Carrie's specific comments (which are very thought-provoking, and she also points to this clarifying discussion by Gladwell and James Surowiecki in Slate magazine).

Political ideology and attitudes toward technology

Right now, though, I'm more interested in what these exchanges reveal about the intersections of political ideology and attitudes toward technology. Historically, I think of technology as being on the side of liberals or leftists (as compared with conservatives who would want to stick with the old ways). Technology = "the Enlightenment" = leftism, desire for change, etc. Even into the 20th century, I'd see this connection, with big Soviet steel factories and New Deal dams. But then, in the 1960s and 1970s (?), it seems to me there was a flip, in which technology became associated with atomic bombs, nuclear power, and other things that are more popular on the right than on the left. The environmentalist left has been more skeptical about technological solutions. In another area of scientific debate, right-leaning scientists have embraced sociobiology and related ideas of bringing genetics into social policy.

But...perhaps recently things have switched back? In battles over the teaching of evolution, it is the liberals who are defending the scientific method and conservatives who are holding back, wanting to respect local culture rather than scientific universals. Similarly with carbon dioxide and climate change.

But, again, I'm not trying here to argue the merits of any of these issues but rather to ask whether it is almost a visceral thing, at any point in time, with one's political allegiances being associated with a view of science.

Is Gladwell's argument inherently anti-rational? Is anti-rationality conservative?

This is what I saw in Carrie's posting on Gladwell. She was irritated by his use of scientific studies to support a sort of irrationalism--a favoring of quick judgments instead of more reasoned analyses. From this perspective, Gladwell's apparent advocacy of unconscious decisions is a form of conservatism. (His position seems more nuanced to me, at least as evidenced in the Slate interview--where he suggests sending police out individually instead of in pairs so they won't be emboldened to overreact--but perhaps Carrie's take on it is correct in the sense that she is addressing the larger message of the book as it is perceived by the general public, rather than any specific attitudes of Gladwell.)

Rationality and ideology

As a larger issue, in the social sciences of recent decades, I think of belief in rationality and "rational choice modeling" as conservative, both in the sense that many of the researchers in this area are politically conservative and in the sense that rationality is somehow associated with "cold-hearted" or conservative attitudes on cost-benefit analyses. But at the same time, quantitative empirical work has been associated with left-leaning views--think of Brown v. Board of Education, or studies of income and health disparities. There's a tension here, because in the social sciences, the people who can understand the technical details of empirical statistical work are the ones who can understand rational choice modeling (and vice versa). So I see all this stuff and keep getting bounced back and forth.

(I'm sure lots has been written about this--these ideas are related to a lot of stuff that Albert Hirschman has written on--and I'd appreciate relevant references, of course. Especially to empirical studies on the topic.)

Mythinformation


"Women's Work: The First 20,000 Years" is one of the coolest books I've ever read, and so I was thrilled to find that Elizabeth Wayland Barber has just come out with a new book (coauthored with her husband, Paul Barber), "When They Severed Earth from Sky : How the Human Mind Shapes Myth". This one's also fascinating. The topic this time is myths or, more generally, stories passed along orally, sometimes for thousands of years. No statistical content here (unless you want to think of statistics very generally as an "information science"; there is in fact some discussion about the ways in which information can be encoded in a way that can be remembered in stories), so it's hard for me to evaluate their claims, but it all seems reasonable to me.

Having read this book, I have a much better sense of the ways in which these stories can be informative without being literally true (in fact, in many cases without referring to any real persons).

Parent to children asset transfers


A few words to complement what has been said:
In econ, assets and income are thought of as a stock and a flow, respectively, and are related through some formula, assets = f(future income), so I find it sensible to explain any discrepancy between the lhs and the rhs by other factors... less so in practice: proxies for the lhs and the rhs are only as good as stock prices and accounting statements, respectively, at reflecting economic value...

In a social context, it would be necessary to know what is included in assets, and income. For example, it has been said that there are relatively few material asset transfers within US families, but what if investment in education is included? In comparison, this isn't discretionary in many other countries, as it is financed by taxation.

Another example: an increase in national debt can be thought of as a transfer in wealth from juniors to seniors, again complicating the definition of asset transfers.

Esther Duflo (economics, MIT) just gave a talk here at the School of Social Work, on "political reservations" for women and minorities--that is, electoral rules that set aside certain political offices for women and ethnic minorities. Different countries have different laws along these lines, for example reserving some percentage of members of a legislature for women.

An almost-randomized experiment in India

Duflo talked about a particular plan in India which reserved to women, on a rotating basis, one-third of Village Council head positions. Each election (elections are held every five years), a different one-third of the villages must elect women leaders. (There were also some reservations for ethnic minorities, but she did not go into detail on that in her talk.)

Duflo's findings

Duflo and her colleagues took advantage of the fact that this system is a "natural experiment," with an essentially random 1/3 of the villages being selected for the treatment each election. They compared the "treated" and "control" villages using data from a national survey on the quality of, and perceptions about, public services in the villages. The survey also included objective measures of the quality and quantity of the services (water, education, transportation, fair price shops, and public health facilities). These objective measures were crucial because they allowed the researchers to distinguish between perceptions and reality.

They found that, on average, the quantity and quality of the services were higher in the villages whose leaders were restricted to be women. There's a lot of variation among villages, and as a result the average differences are not large compared to the standard errors (the avg difference in quantity of services is 1.9 se's away from 0, and the avg difference in quality of services is 1.5 se's away from 0). So they'd be characterized as "suggestive" rather than "statistically significant," I'd say. Nonetheless, it's interesting to see this improvement in performance. Because the treatment was essentially randomized (every third village on a list was selected), it would seem reasonable to attribute these changes to the treatment and not to unmeasured observational factors.

OK, so far so good. But here's something else: they also compared the satisfaction of survey respondents in the villages. On average, people in the villages that were restricted to be headed by women were less satisfied about the public services. This also was barely "statistically significant" (people were, on average, 2% less satisfied, with a standard error of 1%) but interesting. Duflo cited a bunch of papers on biased judgment which suggest that people may very well judge women to be poor leaders, even if they outperform men in comparable positions.

Thus, it seems quite plausible from the data that reserving leadership positions for women could be beneficial--even if the people receiving these benefits don't realize it!

Some statistical comments

As Duflo emphasized in her talk, the #1 reason they could do a study here was that the "treatment" of reserving political spaces for women was essentially randomly assigned across villages. Random assignment is good, and assigning across villages is also good because it gives a high N (over 900).

There are a couple of ways in which I think the analysis could be improved. First, I'd like to control for pre-treatment measurements at the village level. Various village-level information is available from the 1991 Indian Census, including, for example, some measures of water quality. I suspect that controlling for this information would reduce the standard errors of regression coefficients (which is an issue given that most of the estimates are less than 2 standard errors away from 0). Second, I'd consider a multilevel analysis to make use of information available at the village, GP, and state levels. Duflo et al. corrected the standard errors for clustering, but I'd hope that a full multilevel analysis could make use of more information and thus, again, reduce uncertainties in the regression coefficients.
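In symbols, the sort of model I have in mind would look something like this (my notation, just a sketch):

  y_v = alpha_j[v] + theta*T_v + X_v*beta + error_v   (villages v)
  alpha_j ~ N(mu_s[j], tau_GP^2)   (GPs j)
  mu_s ~ N(mu_0, tau_state^2)   (states s),

where T_v is the reservation indicator, X_v holds the pre-treatment measures from the 1991 Census, and the nested alpha's absorb GP- and state-level variation (taking the place of the clustered-standard-error correction). The pre-treatment covariates should soak up some of the village-to-village variation and so shrink the standard error on theta, which is what matters here given that the estimates are hovering around 1.5-2 standard errors from zero.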

References

Duflo's papers on this are here and (with Petia Topalova) here and (with Raghabendra Chattopadhyay) here.

DIC (the Deviance Information Criterion of Spiegelhalter et al.) is a good idea and, I think, the right way to generalize AIC (the Akaike Information Criterion) when trying to get a rough estimate of predictive accuracy for complex models. We discuss DIC in Section 6.7 of Bayesian Data Analysis (second edition) and illustrate its use with the 8-schools example.

However, some practical difficulties can arise:

1. In the examples I've worked on, pD and DIC are computationally unstable. You need a lot more simulations to get a good estimate of pD and DIC than to get a good estimate of parameters in the model. If the simulations are far from convergence, the estimates of pD and DIC can be particularly weird.

Because of this instability, I don't actually recommend using DIC to pick a model. Actually, I don't recommend the use of any automatic criterion to pick a model (although, if I had to choose a criterion, I'd prefer a predictive-error measure such as DIC, rather than something like BIC that I don't fully understand). But I can see that DIC could be useful for understanding how a set of models fit together.

2. bugs and bugs.R use different formulas for pD. bugs uses the formula from Spiegelhalter et al., whereas bugs.R uses var(deviance)/2. Asymptotically, both formulas are correct, but with finite samples I really don't know. I'd expect that the Spiegelhalter et al. formula is better--I say this just because they've thought harder about these things than I have, and I assume they came up with their formula for good reasons! The reason I used a different formula is that the bugs output does not, in general, provide enough information for me to compute theirs.
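For reference, with Dbar denoting the posterior mean of the deviance and theta-hat the posterior mean of the parameters, the two versions are

  pD (Spiegelhalter et al.) = Dbar - D(theta-hat)
  pD (bugs.R) = var(deviance)/2
  DIC = Dbar + pD   (either way).

Both estimates of pD approach the effective number of parameters when the posterior is close to normal; with few simulation draws, or a posterior far from normality, they can differ noticeably, which is part of the instability mentioned in point 1.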

Daniel Scharfstein (http://commprojects.jhsph.edu/faculty/bio.cfm?F=Daniel&L=Scharfstein) recently gave a very good talk at the Columbia Biostatistics Department. He presented an application of causal inference using principal stratification. The example was similar to something I've heard Don Rubin and others speak about before, but I realized I'd been missing something important about this particular example.

Atul Gawande wrote an interesting article in the New Yorker a couple months ago on the varying effectiveness of medical centers around the U.S. in treating cystic fibrosis (CF), a genetic disease that reduces lung functioning in children. Apparently, the basic treatment is the same everywhere--keep the kid's lungs as clear as possible, from an early age--but some hospitals are much better at it than others: "In 2003, life expectancy with CF had risen to thirty-three years nationally, but at the best center it was more than forty-seven."

I'll discuss the article and give a long quote from it, then give my thoughts.

Aggressive doctors

Gawande goes to an average-performing center (in Cincinnati) and the number-one center (in Minneapolis) and interviews and observes people at both places. The difference, at least in how he reports it, is that in the top center the doctors are super-aggressive and really involved with each patient, getting to know them individually and figuring out what it takes to get them on the treatment:

Tim Halpin-Healy (Physics, Barnard College) spoke today at the Collective Dynamics Group on "The Dynamics of Conformity and Dissent". Unfortunately I wasn't able to attend his talk--it looked interesting--but I have to say, speaking curmudgeonly and parochially as a political scientist, that I wish physicists wouldn't use loaded words like "conformity" and "dissent" for these mathematical simplifications. (Conversely, I don't like it when social scientists refer sloppily to uncertainty principles and quantum effects in social interactions.)

I conveyed my vague sense of irritation to Peter Dodds and he replied,

i essentially agree---though on occasion a simple physics model could be said to genuinely capture some essence of whatever absurdly complicated phenomenon, such as cooperation. then it's okay, as long as the physicists involved proceed with some humility (which is of course extremely unlikely). on the other hand, insane notions of people behaving in a way that quantum mechanics might explain (or ising models, another classic) are truly riling. the wholesale transplant of a theory that makes sense for gluons to human behaviour is not good science. philip anderson's science paper of 1972 (i think it was 1972, `more is different') had the right idea i think. at every scale there are a set of locally-based rules that give rise to some collective behaviour at the next scale. and it may be that predicting the rules at the next level is extremely difficult, and they have to be taken from empirical observations.

the particular paper we're discussing on friday has some outcomes that i thought you in particular would be interested in. basically, the system they have evolves into two factions in most cases and into three in relatively special cases. the big problem i have with this model is that the mechanism doesn't make much sense. technically, the model itself is extremely interesting and they have many excellent results but the basic set up is odd.


As I was walking down the street, a guy stuck his head out of one of those carts that sells coffee and bagels, and said to me, "Excuse me, sir." I turned around and he continued, "Tomorrow's Friday, right?" I said yes, and he continued, "Today's Thursday?" I confirmed this one too, and that seemed to satisfy him.

Social networks and voter turnout


Meredith Rolfe has a webpage with some interesting papers-in-progress on voter turnout and social networks--two important topics that are generally considered completely separately. (I guess one could say that the "research networks" of these two problem areas do not have much overlap.)

The separation of voting and social-network research bothers me. From one direction, it is traditional to study voting within a completely individualistic framework (as in much of the rational-choice literature) or else to work with extremely oversimplified "voter" models in which cellular automata go around changing each other's minds--that is not empirical political science. On the other side, social network research tends to fall short of trying to explain political behavior.

Rolfe's work is interesting in trying to connect these areas. I like her model of education and voter turnout (it's consistent, I think, with our model of rational voting).

Also, her paper on social networks and simulations was a helpful review. I'll have to interpret what's in this paper in light of our own work on estimating properties of social networks. It's a challenge to connect these network ideas to some of the underlying political and social questions about clustering of attitudes.

Using base rate information?


Aleks points to this blog entry from "HedgeFundGuy" on bias in decision making. HedgeFundGuy passes on a report that finds that people's opinions are strongly biased by their political leanings, then he gives his take on the findings--he thinks that this so-called bias isn't really a problem, it's just evidence of reasonable Bayesian thinking.

I'll first copy out what HedgeFundGuy had to say (including his own copy of the report of the study), then give my take, which is slightly different from his.

Spatial statistics and voting


In political science, "spatial models" are usually metaphorical, with the "spatial" dimensions representing political ideology (left to right) or positions on issues such as war/peace or racial tolerance. But what about actual spatial dimensions, actual distances on the ground? In some sense, spatial models are used all the time in analyzing political data, since states, counties, Congressional districts, neighborhoods, and so forth are always (or nearly always) spatially contiguous. Along with these political structures, one can also add spatial information in the form of distances between units and then fit so-called geostatistical models.

Drew Thomas has done some work along these lines, fitting spatial-statistical models to vote data from the counties in Iowa. (See also a draft of his paper here.) Much more could be done here, clearly, but this work might be of interest as a starting point for others who want to play with these sorts of models.
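For readers who haven't seen the geostatistical setup, the generic version (not the specifics of Thomas's model) is to let nearby counties have correlated errors:

  y_i = X_i*beta + w_i + error_i,   Cov(w_i, w_j) = sigma^2 * exp(-d_ij / rho),

where y_i might be the Democratic share of the two-party vote in county i, d_ij is the distance between county centroids, and rho controls how quickly the spatial correlation dies off. The estimated rho and sigma then answer the question, "how spatially clustered is the vote, beyond what the county-level predictors explain?"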

