October 2006 Archives

Here's a nice description of our project (headed by Lex van Geen) to use cell phones to help people in Bangladesh lower their arsenic exposure.

phone.jpg

Further background is here.

I came across this detailed article on problems with electronic voting systems by Jon Stokes: How to steal an election by hacking the vote (also available as PDF). Some of the academics working on detecting this problem are Walter Mebane and Jasjeet Sekhon. But of course, one can be subtler than this. The ACM (Association for Computing Machinery) is active in pointing out problems, and numerous articles can be found with a Google query.

voting.png

Model diagnostics

| No Comments

From the Bayes-News list, Alexander Geisler writes,

My letter to the New Yorker

| No Comments

Dear Editors,

Ian Frazier ("Snook," October 30th) writes, "you will find surprisingly often that people take up professions suggested by their last names." In an article called "Why Susie sells seashells by the seashore: implicit egotism and major life decisions," Brett Pelham, Matthew Mirenberg, and John Jones found some striking patterns. Just for example, there were 482 dentists in the United States named Dennis, as compared with only 260 that would be expected simply from the frequencies of Dennises and dentists in the population. On the other hand, the 222 "extra" Dennis dentists are only a very small fraction of the 620,000 Dennises in the country; the name pattern is thus striking but represents a small total effect. Some quick calculations suggest that approximately 1% of Americans' career choices are influenced by the sound of their first name.

Yours
Andrew Gelman

[not in the email] Here's the relevant link.
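
[Also not in the email] The "striking but small" arithmetic is easy to check. Here's the back-of-the-envelope version in R, using only the numbers quoted above; the overall 1%-of-career-choices figure rests on further assumptions beyond these numbers and isn't reproduced here.

  observed <- 482      # dentists named Dennis
  expected <- 260      # expected from the frequencies of Dennises and dentists alone
  dennises <- 620000   # approximate number of Dennises in the country

  observed - expected                # 222 "extra" Dennis dentists
  (observed - expected) / dennises   # about 0.04% of all Dennises: a small total effect
  observed / expected                # about 1.85: a large relative excess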

Joe Bafumi writes,

Dartmouth College seeks applicants for a post-doctoral fellowship in the area of applied statistics. Dartmouth is developing a working group in applied statistics, and the fellowship constitutes one part of this new initiative. The applied statistics fellow will be in residence at Dartmouth for the 2007-2008 academic year, will teach one 10-week introductory course in basic statistics, and will be expected to further his or her research agenda during time at Dartmouth. Research speciality is open but applicants should have sufficient inter-disciplinary interest so as to be willing to engage different fields of study that rely on quantitative techniques. The fellow will receive a competitive salary with a research account. Dartmouth is an EO/AA employer and the college encourages applications from women and minority candidates. Applications will be reviewed on a rolling basis. Applicants should send a letter of interest, two letters of recommendation, one writing sample, and a CV to Michael Herron, Department of Government, 6108 Silsby Hall, Hanover, NH 03755.

This looks interesting to me. I suggested to Joe that they also invite visitors to come for a few days at a time to become actively involved in the research projects going on at Dartmouth.

Jo-Anne Ting writes,

I'm from the Computational Learning and Motor Control lab at the University of Southern California. We are currently looking at a weighted linear regression model where the data has unequal variances (as described in your "Bayesian Data Analysis" book). We use EM to infer the parameters of the posterior distributions.

However, we have noticed that in the scenario where the data set consists of a large number of outliers that are irrelevant to the regression, the value of the posterior predictive variance would be affected by the number of outliers in the data set, since the posterior variance of the data is inversely proportional to the number of samples in the data set. It seems to me that logically, this should not be the case, since I would hope the amount of confidence associated with a prediction would not be decreased by the number of outliers in the data set.

Any insight you could share would be greatly appreciated regarding the effect of the number of samples in a data set on the confidence interval of a prediction in heteroscedastic regression.

My response: I'm not quite sure what's going on here, because I'm not sure exactly what unequal-variance model is being used. But if you have occasional outliers, then, yes, the predictive variance should be large, since the predictive variance represents uncertainty about individual predicted data points (which, from the evidence of the data so far, could indeed be "outliers," i.e., far from the model's point prediction).

One way to get a handle on this would be to do some cross-validation. Cross-validation shouldn't be necessary if you fully understand and believe the model, but if you're still trying to figure things out it can be a helpful way to see if the predictions and predictive uncertainties make sense.
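
To illustrate the general point with a toy example (plain least squares with some gross outliers mixed in, not the weighted EM setup described above): the interval for the regression line narrows as the sample grows, but the interval for a new observation stays wide, because it includes the data-level variance that the outliers inflate.

  set.seed(1)
  n <- 200
  x <- runif(n)
  y <- 2 + 3*x + rnorm(n, 0, 0.5)
  outlier <- rbinom(n, 1, 0.2) == 1                       # 20% gross outliers
  y[outlier] <- y[outlier] + rnorm(sum(outlier), 0, 10)

  fit <- lm(y ~ x)
  new <- data.frame(x = 0.5)
  predict(fit, new, interval = "confidence")   # uncertainty about the line: narrow
  predict(fit, new, interval = "prediction")   # uncertainty about a new point: wide

If the outliers really are irrelevant to the regression, a robust or mixture error model would bring the predictive interval back down; cross-validation is a quick way to check whether the intervals you are getting are calibrated.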

Michael Sobel is speaking Monday. Here's the abstract:

During the past 20 years, social scientists using observational studies have generated a large and inconclusive literature on neighborhood effects. Recent workers have argued that estimates of neighborhood effects based on randomized studies of housing mobility, such as the “Moving to Opportunity Demonstration” (MTO), are more credible. These estimates are based on the implicit assumption of no interference between units, that is, a subject’s value on the response depends only on the treatment to which that subject is assigned, not on the treatment assignments of other subjects. For the MTO studies, this assumption is not reasonable. Although little work has been done on the definition and estimation of treatment effects when interference is present, interference is common in studies of neighborhood effects and in many other social settings, for example, schools and networks, and when data from such studies are analyzed under the “no interference assumption”, very misleading inferences can result. Further, the consequences of interference, for example, spillovers, should often be of great substantive interest, though little attention has been paid to this. Using the MTO demonstration as a concrete context, this paper develops a framework for causal inference when interference is present and defines a number of causal estimands of interest. The properties of the usual estimators of treatment effects, which are unbiased and/or consistent in randomized studies without interference, are also characterized. When interference is present, the difference between a treatment group mean and a control group mean (unadjusted or adjusted for covariates) does not estimate an average treatment effect, but rather the difference between two effects defined on two distinct subpopulations. This result is of great importance, for a researcher who fails to recognize this could easily infer that a treatment is beneficial when it is universally harmful.

Here's the paper. (Scroll past the first page which is blank.) See here for more on Sobel and causal inference. The talk is Mon noon in the stat dept.
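
To see the no-interference issue concretely, here's a toy simulation (my own illustration, not Sobel's model): each subject's outcome depends on his or her own treatment and on the fraction treated in the subject's group, so the simple treatment-minus-control comparison no longer corresponds to any single average treatment effect.

  set.seed(2)
  n_groups <- 100
  n_per <- 20
  group <- rep(1:n_groups, each = n_per)
  z <- rbinom(n_groups * n_per, 1, 0.5)              # randomized treatment
  frac_treated <- ave(z, group)                      # spillover: share treated in one's group
  y <- 1 + 2*z + 3*frac_treated + rnorm(length(z))   # direct effect 2, spillover effect 3

  mean(y[z == 1]) - mean(y[z == 0])                  # naive difference in means

The expected value of that difference is about 2.15, which is neither the effect of treating one person with everyone else's assignment held fixed (2) nor the effect of treating everyone rather than no one (5).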

The type of user interface we are used to from data mining and machine learning is slowly appearing in the R environment. Today I found Rattle, a Gnome-based interface to R (via KDnuggets). While tools like Weka or Orange are still a generation ahead of Rattle, Rattle is quite a bit better than the usual command-line interface for certain tasks, such as model evaluation. Here's an example of ROC-based evaluation of various models on the same dataset:

rattle.png

One of the benefits of Rattle is that it also acts as a guide to various interesting R packages that I was not aware of earlier and shows how they can be used: it generates a script from the commands that were clicked graphically (something other systems could also do). The main downside is that Rattle works only with the latest version of R (2.4), not with earlier versions.
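
Of course, the same kind of ROC comparison can be scripted directly in R without any GUI. Here's a minimal hand-rolled version on simulated data (no packages assumed; in practice you'd compute the curves on held-out data rather than on the training set):

  set.seed(3)
  n <- 1000
  x1 <- rnorm(n); x2 <- rnorm(n)
  y <- rbinom(n, 1, plogis(-1 + 2*x1 + 0.5*x2))

  fit_full  <- glm(y ~ x1 + x2, family = binomial)
  fit_small <- glm(y ~ x2, family = binomial)

  roc <- function(score, y) {
    o <- order(score, decreasing = TRUE)
    list(fpr = c(0, cumsum(y[o] == 0) / sum(y == 0)),
         tpr = c(0, cumsum(y[o] == 1) / sum(y == 1)))
  }
  r1 <- roc(fitted(fit_full), y)
  r2 <- roc(fitted(fit_small), y)
  plot(r1$fpr, r1$tpr, type = "l", xlab = "False positive rate", ylab = "True positive rate")
  lines(r2$fpr, r2$tpr, lty = 2)
  abline(0, 1, col = "grey")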

From Joe, here are the data that he used to predict House vote shares from pre-election polls in midterm elections for the Erikson, Bafumi, and Wlezien paper:

congpolls2.jpg

(See here for the big version of the graph.)

Advice for referees

| 6 Comments

Tyler Cowen has some tips here. I disagree with his point 2. I try to do all referee reports within 15 minutes of receiving them. On the other hand, it would probably be a disaster if all referees followed my approach. A diversity of individual strategies probably results in the best collective outcome. I'm often impressed by the elaborate referee reports given for my own articles. On the other hand, my reports are always on time, and my judgments are trustworthy (I think).

Also, my impression is that the referee process is more serious in economics than in other fields, so that might explain some of our differences in approach.

John Monahan and Dennis Boos pointed out that one of the key ideas in this paper by Sam Cook, Don Rubin, and myself also arose in two papers of theirs. They write,

My colleagues Joe Bafumi, Bob Erikson, and Christopher Wlezien just completed their statistical analysis of seat and vote swings. They write:

Via computer simulation based on statistical analysis of historical data, we show how generic vote polls can be used to forecast the election outcome. We convert the results of generic vote polls into a projection of the actual national vote for Congress and ultimately into the partisan division of seats in the House of Representatives. Our model allows both a point forecast—our expectation of the seat division between Republicans and Democrats—and an estimate of the probability of partisan control. Based on current generic ballot polls, we forecast an expected Democratic gain of 32 seats with Democratic control (a gain of 18 seats or more) a near certainty.

These conclusions seem reasonable to me, although I think they are a bit over-certain (see below).

Here's the full paper. Compared to our paper on the topic, the paper by Bafumi et al. goes further by predicting the average district vote from the polls. (We simply determine the vote needed by the Democrats to win a specified number of seats, without actually forecasting the vote itself.) In any case, the two papers use similar methodology (although, again, with an additional step in the Bafumi et al. paper). In some respects, their model is more sophisticated than ours (for example, they fit separate models to open seats and incumbent races).

Slightly over-certain?

The only criticism I'd make of this paper is that they might be understating the uncertainty in the seats-votes curve (that is, the mapping from votes to seats). The key point here is that they get district-by-district predictions (see equations 2 and 3 on page 7 of their paper) and then aggregate these up to estimate the national seat totals for the two parties. This aggregation does include uncertainty, but only of the sort that's independent across districts. In our validations (see section 3.2 of our paper), we found the out-of-sample predictive error of the seats-votes curve to be quite a bit higher than the internal measure of uncertainty obtained by aggregating district-level errors. We dealt with this by adding an extra variance term to the predictive seats-votes curve.
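
To sketch what that extra variance term does (a generic illustration with made-up numbers, not the model in either paper): simulate each district's outcome as its previous vote plus a predicted national swing, an independent district-level error, and a shared national error, then count seats.

  set.seed(4)
  n_districts <- 435
  dem_prev <- 0.25 + 0.5 * rbeta(n_districts, 2, 2)   # stand-in for last election's Democratic vote shares
  swing <- 0.04                                       # hypothetical predicted national swing
  sd_district <- 0.06                                 # independent district-level error
  sd_national <- 0.03                                 # shared national error (the "extra" variance term)

  seats <- replicate(10000, {
    v <- dem_prev + swing + rnorm(1, 0, sd_national) + rnorm(n_districts, 0, sd_district)
    sum(v > 0.5)
  })
  mean(seats >= 218)                    # probability of a majority
  quantile(seats, c(0.05, 0.5, 0.95))

Setting sd_national to zero noticeably narrows the seat distribution, which is the sense in which aggregating only independent district-level errors can make a forecast look more certain than it is.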

In summary

I like this paper: it seems reasonable, and I like how they do things in two steps, using the polls to predict the national swing and then using district-level information to estimate the seats-votes curve. I'd like to see the scatterplot that would accompany equation 1, and I think the election outcome (the number of seats for each party) isn't quite as predictable as they claim, but these are minor quibbles. The paper goes beyond what we did, and all of this is certainly a big step beyond the usual approach of just taking the polls, not knowing what to do with them, and giving up!

Galton was a hero to most

| 3 Comments

In Graphics of Large Datasets: Visualizing a Million (about which more in a future entry), I saw the following graph reproduced from an 1869 book by Francis Galton, one of the fathers of applied statistics:

Genius.png

According to this graph [sorry it's hard to read: the words on the left say "100 per million above this line", "Line of average height", and "100 per million below this line"; and on the right it says "Scale of feet"], one man in a million should be 9 feet tall! This didn't make sense to me: if there were about 10 million men in England in Galton's time, this would lead us to expect 10 nine-footers. As far as I know, this didn't happen, and I assume Galton would've realized this when he was making the graph.
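
Here's the quick calculation that makes the 9-foot line look so strange, assuming heights are roughly normal; the mean and standard deviation below (about 5'8" and 2.6 inches for adult men) are rough modern figures I'm plugging in, not Galton's:

  mu <- 68       # assumed mean height of adult men, in inches
  sigma <- 2.6   # assumed standard deviation

  qnorm(1 - 1e-6, mu, sigma)     # the one-in-a-million height: about 80 inches, i.e. 6'8"
  1 - pnorm(9 * 12, mu, sigma)   # chance of a 9-footer under this model: essentially zero
  1e7 * 1e-6                     # expected 9-footers among 10 million men if the graph were right

So under a normal model the one-in-a-million man is around 6'8", nowhere near 9 feet, and the 10 expected nine-footers never show up.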

From the long New York Times article on an example of scientific fraud by grant-hogging entrepreneurs-in-academia, imputation was used as an excuse for manipulating data to support the anticipated hypothesis:

Then, when pressed on how fictitious numbers found their way into the spreadsheet he’d given DeNino, Poehlman laid out his most elaborate explanation yet. He had imputed data — that is, he had derived predicted values for measurements using a complicated statistical model. His intention, he said, was to look at hypothetical outcomes that he would later compare to the actual results. He insisted that he never meant for DeNino to analyze the imputed values and had given him the spreadsheet by mistake. Although data can be imputed legitimately in some disciplines, it is generally frowned upon in clinical research, and this explanation came across as hollow and suspicious, especially since Poehlman appeared to have no idea how imputation was done.

The sentence was one year and one day in federal prison, followed by two years of probation.

Map of Springfield

| No Comments

Here's an amusing data visualization:

springfield.jpg

Here is the full-size map (pdf version here). Some more info:

While the placement of most locations is arbitrary, many are placed according to where they appear in relationship to each other in specific episodes of The Simpsons. In some cases 'one-time references' to specific locations have been disregarded in favor of others more often repeated. Due to the many inconsistencies among episodes, the map will never be completely accurate.

(Link from Information Aesthetics blog.)

After I spoke at Princeton on our studies of social polarization, John Londregan had a suggestion for using such questions to get more precise survey estimates. His idea was, instead of asking people, "Who do you support for President?" (for example), you would ask, "How many of your close friends support Bush?" and "How many of your close friends support Kerry?" You could then average these to get a measure of total support.

The short story is that such a measure could increase bias but decrease variance. Asking about your friends could give responses in error (we don't really always know what our friends think), and also there's the problem that "friends" are not a random sample of people (at the very least, we're learning about the more popular people, on average). On the other hand, asking the question this way increases the effective sample size, which could be relevant for estimating small areas. For example, in a national poll, you could try to get breakdowns by state and even congressional district.

It might be worth doing a study, asking questions in different ways and seeing what is gained and lost by asking about friends/acquaintances/whatever.
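
Here's a toy simulation of the tradeoff, with made-up values for the two problems mentioned above (friends being slightly unrepresentative, and some error in reporting what they think):

  set.seed(5)
  true_p <- 0.52
  n <- 500       # respondents per poll
  k <- 5         # close friends each respondent reports on
  bias <- 0.02   # assumed: friends lean slightly more toward the candidate than the population
  err <- 0.10    # assumed: chance of misreporting a friend's preference

  one_poll <- function() {
    self <- rbinom(n, 1, true_p)
    friends <- matrix(rbinom(n * k, 1, true_p + bias), n, k)
    flip <- matrix(rbinom(n * k, 1, err), n, k)
    reported <- ifelse(flip == 1, 1 - friends, friends)
    c(self = mean(self), friends = mean(reported))
  }
  sims <- replicate(2000, one_poll())
  rowMeans(sims) - true_p       # bias of each estimator
  apply(sims, 1, sd)            # sampling variability
  rowMeans((sims - true_p)^2)   # mean squared error

With these particular made-up numbers the friends-based estimate wins on mean squared error, but flip the bias and error assumptions around and it loses, which is exactly why the empirical comparison would be worth doing. (A real version would also have to deal with overlapping friendship circles, which this sketch ignores.)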

It's not easy being a Democrat. After their stunning loss of both houses of Congress in 1994, the Democrats have averaged over 50% of the vote in Congressional races in every year except 2002, yet they have not regained control of the House. The same is true with the Senate: in the last three elections (during which 100 senators were elected), Democratic candidates have earned three million more votes than Republican candidates, yet they are outnumbered by Republicans in the Senate as well. 2006 is looking better for the Democrats, but our calculations show that they need to average at least 52% of the vote (which is more than either party has received since 1992) to have an even chance of taking control of the House of Representatives.

Why are things so tough? Looking at the 2004 election, the Democrats won their victories with an average of 69% of the vote, while the Republicans averaged 65% in their contests, thus "wasting" fewer votes. The Republicans won 47 races with less than 60% of the vote; the Democrats only 28. Many Democrats are in districts where they win overwhelmingly, while many Republicans are winning the close races--with the benefit of incumbency and, in some cases, favorable redistricting.

105090_graph.png

The accompanying chart (larger version here) shows the Democrats' share of the Congressional vote over the past few decades, along with what we estimate they need to have a 10%, 50%, and 90% chance of winning the crucial 218 seats in the House of Representatives. We performed the calculation by constructing a model to predict the 2006 election from 2004, and then validating the method by applying it to previous elections (predicting 2004 from 2002, and so forth). We predict that the Democrats will need 49% of the average vote to have a 10% chance, 52% of the vote to have an even chance, and 55% of the vote to have a 90% chance of winning the House. The Democrats might be able to do it, but it won't be easy.

See here for the full paper (by John Kastellec, Jamie Chandler, and myself).

P.S. After we wrote this article (and the above summary), we were pointed to some related discussions by Paul Krugman (see links/discussions from Mark Thoma and Kevin Drum) and Eric Alterman. They do their calculations using uniform partisan swing whereas we allow for variation among districts in swings, but the general results are the same.

Leonard Lopate interviewed me today on local radio to talk about my paper, Rich State, Poor State, Red State, Blue State: What's the Matter with Connecticut (coauthored with Boris Shor, Joe Bafumi, and David Park). I've been a fan of Lopate for a while (ever since hearing his interview with the long-distance swimmer Lynne Cox), but I have a much better sense of what makes him a good interviewer now that I've been interviewed myself.

The trick is that he had a series of questions. If he had let me just ramble on for five minutes, I would've come off horribly, but I was able to answer the questions one at a time and explain things clearly. At the same time, the interview was live, and I didn't see the questions ahead of time, so things were spontaneous. (I only wish I had remembered to mention that we found similar patterns in Mexican elections.)

It was interesting to get this insight into interviewing techniques.

At the Deutsche Bank Group think tank, I spotted "Are elite universities losing their competitive edge?" by Han Kim, Morse, and Zingales. It's an interesting application of multilevel modeling. I won't write too much because the abstract is self-explanatory:


We study the location-specific component in research productivity of economics and finance faculty who have ever been affiliated with the top 25 universities in the last three decades. We find that there was a positive effect of being affiliated with an elite university in the 1970s; this effect weakened in the 1980s and disappeared in the 1990s. We decompose this university fixed effect and find that its decline is due to the reduced importance of physical access to productive research colleagues. We also find that salaries increased the most where the estimated externality dropped the most, consistent with the hypothesis that the de-localization of this externality makes it more difficult for universities to appropriate any rent. Our results shed some light on the potential effects of the internet revolution on knowledge-based industries.

Here is a plot of research output (measured in journal publications) given the number of post-PhD years:

research productivity.png

My main "complaint" about the paper is that "measuring" research productivity in terms of quantity or citation impact is asking for trouble: by Goodhart's law, it's easy to optimize for the number of publications (split research into the smallest publishable bits), for citations (cite your friends and have your friends cite you), or for the impact of the journals you publish in (polish the paper so that it glitters, and sprinkle it with impenetrable mathematical mystique). What really matters is work that people will read and be affected by. Most of the good papers I have read in the past few years weren't published in an elite journal: I have read drafts, circulated manuscripts, and web pages. And most of the time I spent reading elite journals was a waste of time.

The sociology of sociology

| No Comments

I came across this letter by Jordan Scher from the London Review of Books a couple years ago:

Jenny Diski's portrait of Erving Goffman and her characterisation of the period from the late 1950s to the 1970s precisely captures the flavour of those fermentative days (LRB, 4 March). I came to know Goffman in the late 1950s when he and I were 'shaking the foundations' of, respectively, sociology and psychiatry at the National Institute of Mental Health in Bethesda, Maryland. We became competitive 'friends', if such were possible with this Cheshire-Cat-smiling porcupine.

Many of my experiences with Goffman revolved around Saturday night dinner parties. Always tinkering with the elements of personal interchange, Goffman frequently toyed with me regarding invitations to these parties. He would invite a young sociological student, Stewart Perry, with whom I shared an office, and his wife, a sociologist, to dinner proper. I would be invited, not to dinner, but as a post-prandial guest. Naturally, being as prickly as Goffman, but refusing to succumb to his baiting, I would politely decline. The slight must surely have delighted him.

Goffman was then 'outsourcing' himself at St Elizabeth's Hospital, beginning the research that eventually led to Asylums. At the same time, I was directing a ward of chronic schizophrenics at NIMH, developing treatment based on a structured programme of habilitation and rehabilitation. My maverick efforts provoked great controversy in the face of the prevailing psychoanalytic and 'permissive' orientation of the NIMH. I felt that Goffman and I shared a sort of intellectual kinship. Both of us viewed human behaviour as the ludic, or play-acting, presentation of self.

My last encounter with Goffman must have been during the final year of his life. We bumped into each other at a professional meeting, where he greeted me with a typical smiling riposte: 'I always thought I was going to hear much more of you! What happened?' 'How is your wife?' I asked. 'She killed herself,' he replied matter-of-factly. 'Finally escaped you,' I rejoined. (She had made several suicide attempts while we were at NIMH.)

Carrie asks:

If by any chance you're still teaching kids to do surveys, we have a project we could REALLY use help on. . . . we'd love to have a survey of ipod users asking them how many ipods they have owned, how often they used each of them, and how long they lasted before dying. We'd then like to crunch that data to find the likelihood of the ipod dying at given intervals.
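
If the survey happens, the "likelihood of the iPod dying at given intervals" question is a standard survival-analysis problem, since many iPods in the sample will still be working (censored) at the time of the survey. Here's a sketch in R using the survival package; the data below are simulated stand-ins, not real numbers:

  library(survival)
  set.seed(6)

  n <- 300
  lifetime  <- rweibull(n, shape = 1.5, scale = 36)   # true months until failure (made up)
  follow_up <- runif(n, 6, 48)                        # months of ownership at survey time
  months <- pmin(lifetime, follow_up)
  died   <- as.numeric(lifetime <= follow_up)         # 1 if the iPod had already died

  fit <- survfit(Surv(months, died) ~ 1)
  summary(fit, times = c(12, 24, 36))   # estimated proportion still working at 1, 2, 3 years
  plot(fit, xlab = "Months of use", ylab = "Proportion of iPods still working")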

Matt writes,

Tex and Jimmy sent me links to this study by Gilbert Burnham, Riyadh Lafta, Shannon Doocy, and Les Roberts estimating the death rate in Iraq in recent years. (See also here and here for other versions of the report). Here's the quick summary:

Between May and July, 2006, we did a national cross-sectional cluster sample survey of mortality in Iraq. 50 clusters were randomly selected from 16 Governorates, with every cluster consisting of 40 households. Information on deaths from these households was gathered. Three misattributed clusters were excluded from the final analysis; data from 1849 households that contained 12 801 individuals in 47 clusters was gathered. 1474 births and 629 deaths were reported during the observation period. Pre-invasion mortality rates were 5·5 per 1000 people per year (95% CI 4·3–7·1), compared with 13·3 per 1000 people per year (10·9–16·1) in the 40 months post-invasion. We estimate that as of July, 2006, there have been 654 965 (392 979–942 636) excess Iraqi deaths as a consequence of the war, which corresponds to 2·5% of the population in the study area. Of post-invasion deaths, 601 027 (426 369–793 663) were due to violence, the most common cause being gunfire.

And here's the key graph:

iraq.png

Well, they should really round these numbers to the nearest 50,000 or so. But that's not my point here. I wanted to bring up some issues related to survey sampling (a topic that's on my mind since I'm teaching it this semester):

Cluster sampling

The sampling is done by clusters. Given this, the basic method of analysis is to summarize each cluster by the number of people and the number of deaths (for each time period) and then treat the clusters as the units of analysis. The article says they use "robust variance estimation that took into account the correlation," but it's really simpler than that. Basically, the clusters are the units. With that in mind, I would've liked to see the data for the 50 clusters. Strictly speaking, this isn't necessary, but it would've fit easily enough in the paper (or, certainly, in the technical report), and that would make it easy to replicate that part of the analysis.

Ratio estimation

I couldn't find in the paper the method used to extrapolate to the general population, but I assume it was ratio estimation (reported deaths were 629/12801 = 4.9%; if you then subtract the deaths from before the invasion and multiply by 12/42 (since they're counting 42 months after the invasion), I guess you get the 1.3% per year, i.e., the 13.3 per 1000, reported in the abstract). For pedagogical purposes alone, I would've liked to see this described as a ratio estimate (especially since this information goes into the standard error).
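
Since the cluster-level data aren't reported, here's what the calculation would look like on made-up cluster summaries: treat the clusters as the units, form the ratio estimate of the death rate, and take its standard error from the between-cluster variation.

  set.seed(7)
  n_clusters <- 47
  persons <- rpois(n_clusters, 270)              # people per cluster (made-up stand-ins)
  deaths  <- rpois(n_clusters, 0.05 * persons)   # deaths per cluster (made up)

  R_hat <- sum(deaths) / sum(persons)            # ratio estimate of the death rate

  # Linearized standard error of the ratio estimator, clusters as units
  resid <- deaths - R_hat * persons
  se <- sqrt(sum(resid^2) / (n_clusters * (n_clusters - 1))) / mean(persons)
  c(estimate = R_hat, se = se)

With the actual 47 cluster totals, this half page of code would let anyone reproduce the headline rates and their intervals, which is why I'd have liked to see those numbers in the paper or the technical report.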

Incidentally, the sampling procedure gives an estimate of the probability that each household in the sample is selected, and from this we should be able to get an estimate of the total population and the total number of births, which could be compared with other sources.

I also saw a concern that they would oversample large households, but I don't see why that would happen from the study design; also, the ratio estimation should fix any such problem, at least to first order. The low nonresponse numbers are encouraging if they are to be believed.

It's all over but the attributin'

On an unrelated note, I think it's funny for people to refer to this as the "Lancet study" (see, for example, here for some discussion and links). Yes, the study is in a top journal, and that means it passed a referee process, but it's the authors of the paper (Burnham et al.) who are responsible for it. Let's just say that I wouldn't want my own research referred to as the "JASA study on toxicology" or the "Bayesian Analysis report on prior distributions" or the "AJPS study on incumbency advantage" or whatever.

Age and voting

| 4 Comments

There was a related article in the paper today (here's the link, thanks to John K.) so I thought I'd post these pictures again:

27-4.gif

27-2.gif

27-3.gif

See here for my thoughts at the time.

Thinking more statistically . . .

This is a paradigmatic age/time/cohort problem. We'd like to look at a bunch of these survey results over time, maybe also something longitudinal if it's available, then set up a model to estimate the age, time, and cohort patterns (recognizing, as always, that it's impossible to estimate all of these at once without some assumptions).
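
The identification problem is just the exact linear dependence age = period - cohort; here's a two-minute demonstration in R with made-up data:

  set.seed(8)
  period <- sample(1952:2004, 500, replace = TRUE)   # survey year
  age    <- sample(18:90, 500, replace = TRUE)
  cohort <- period - age                             # birth year
  y <- rnorm(500)                                    # made-up outcome

  coef(lm(y ~ age + period + cohort))   # one coefficient comes back NA: exact collinearity

So any age/period/cohort decomposition has to bring in something extra: nonlinearity in one of the terms, a prior distribution, or a substantive constraint.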

Juliet Eilperin wrote an article in the October Atlantic Monthly on the struggles of moderates running for reelection in Congress. She makes an error that's seductive enough that I want to go to the trouble of correcting it. Eilperin writes:

The most pressing issue in American politics this November shouldn’t be who’s going to win seats in the House of Representatives, but who’s most likely to lose them: moderates in swing districts. We’ve set up a system that rewards the most partisan representatives with all-but-lifetime tenure while forcing many of those who work toward legislative compromises to wage an endless, soul-sapping fight for political survival.

Thanks to today’s expertly drawn congressional districts, most lawmakers represent seats that are either overwhelmingly Republican or overwhelmingly Democratic. As long as House members appeal to their party’s base, they’re in okay shape—a strategy that has helped yield a 98 percent reelection rate on Capitol Hill.

She continues with lots of stories about how the moderates in Congress have to work hard for reelection, and how the system seems stacked against them.

But . . . this isn't quite right. Despite all the efforts of the gerrymanderers, there are a few marginal seats, some districts where the Dems and Reps both have a chance of winning. If you're a congressmember in one of these districts, well, yeah, you'll have to work for reelection. It doesn't come for free.

These marginal districts are often represented by moderates. But it's the composition of the district, not the moderation of the congressmember, that's making the elections close. If the congressmember suddenly became more of an extremist, he or she wouldn't suddenly get more votes--in fact, most likely they would lose votes by becoming more extreme (contrary to the implication of the last sentence in the above quotation).

In summary

Congressmembers running for reelection in marginal seats have to work hard, especially if their party seems likely to lose seats (as with the Democrats in 1994 and, possibly, the Republicans this year). But they're having close races (and possibly losing) because of where they are, not because of their moderate views. And, perhaps more to the point, what's the alternative? Eilperin has sympathy for these congressmembers, but somebody has to worry about reelection, or there'd never be any turnover in Congress at all.

P.S. A perhaps more interesting point, not raised in the article, is why there aren't more successful primary-election challengers in the non-marginal seats.

cool != beneficial

| 2 Comments

In a letter published in the latest New Yorker, Douglas Robertson writes,

James Surowiecki, in his column on sports betting, writes, "How much difference is there, after all, between betting on the future price of wheat . . . and betting on the performance of a baseball team?" (The Financial Page, September 25th). Futures markets in products such as wheat allow farmers and other producers to shield themselves from some financial risks, and thereby encourage the production of necessities. In this sense, the futures markets are more akin to homeowners' insurance or liability insurance than to gambling on sports. But there is no corresponding economic benefit to betting on sports; on the contrary, there are serious costs involved in protecting the sports activities from fixing and other corruptions that invariably accompany such gambling activity.

This is a good point. I enjoy gambling in semi-skill-based settings (poker, sports betting, election pools, etc.), and betting markets are cool, but it is useful to step back a bit and consider the larger economic benefits or risks arising from such markets.

More physicist-bashing

| 1 Comment

Drago Radev mentions "a discussion from a few years ago between a group of physicists in Italy (Benedetto et al.) and Joshua Goodman (a computer scientist at Microsoft Research)":

Benedetto et al. had published a paper ("Language Trees and Zipping") in a good Physics journal (Physical Review Letters) in which they showed a compression-based method for identifying patterns in text and other sequences.

According to Goodman

"I first point out the inappropriateness of publishing a Letter unrelated to physics. Next, I give experimental results showing that the technique used in the Letter is 3 times worse and 17 times slower than a simple baseline, Naive Bayes. And finally, I review the literature, showing that the ideas of the Letter are not novel. I conclude by suggesting that Physical Review Letters should not publish Letters unrelated to physics."

Benedetto et al.'s rebuttal appeared on arXiv.org.
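
For the curious, the zipping idea can be sketched in a few lines of R. This is the generic compression-distance trick, not necessarily the exact procedure in the Letter: attribute an unknown text to whichever reference text it compresses best with. (On strings this short the comparison is noisy; the original paper used much longer texts.)

  # Extra compressed bytes needed when the unknown text is appended to a reference text
  zlen  <- function(s) length(memCompress(charToRaw(s), type = "gzip"))
  extra <- function(unknown, reference) zlen(paste0(reference, unknown)) - zlen(reference)

  english <- "the quick brown fox jumps over the lazy dog and runs far away into the hills"
  italian <- "la volpe veloce salta sopra il cane pigro e corre lontano verso le colline"
  unknown <- "the dog runs over the hills and the fox jumps far away"

  extra(unknown, english)   # extra bytes when appended to the English text
  extra(unknown, italian)   # ditto for Italian; the smaller number is the better match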

P.S. I think it's ok for me to make fun of physicists since I majored in physics in college and switched to statistics because physics was too hard for me.

Sweden is not Finland

| 5 Comments

I came across this:

While some Scandinavian countries are known to have high levels of suicide, many of them – including Sweden, Finland and Iceland – ranked in the top 10 for happiness. White believes that the suicide rates have more to do with the very dark winters in the region, rather than the quality of life.

Jouni's response:

Technically it's correct - "While *some* Scandinavian countries ... have high levels of suicide ... Sweden, Finland and Iceland ranked in the top 10 for happiness..."

That "some Scandinavian country" is Finland; Sweden (or Iceland - surprisingly) has roughly 1/2 the suicide rate of Finland.

Readings For the Layman

| 4 Comments

Paul Mason writes,

I have been trying to follow the Statistical Modeling, Causal Inference, and Social Science Blog. I have had a continuing interest in statistical testing as an ex-Economics major and follower of debates in the philosophy of science. But I am finding it heavy going. Could you point me to (or post) some material for the intelligent general reader.

I'd start with our own Teaching Statistics: A Bag of Tricks, which I think would be interesting to learners as well. And I have a soft spot for our new book on regression and multilevel modeling. But perhaps others have better suggestions?

Andrew Gelman has a blog

| 1 Comment

Politically committed research

| 8 Comments

I was talking with Seth about his and my visits to the economics department at George Mason University. One thing that struck me about the people I met there was that their research was strongly aligned with their political convictions (generally pro-market, anti-government).

I discussed some of this here in the context of my lunch conversation with Robin Hanson and others about alternatives to democracy and here in the context of Bryan Caplan's book on voting, but it comes up in other areas too; for example, Alex Tabarrok edited a book on private prisons. My point here is not to imply that Alex etc. are tailoring their research to their political beliefs but rather that, starting with these strong beliefs about government and the economy, they are drawn to research that either explores the implications or evaluates these beliefs.

Comparable lines of research, from the other direction politically, include the work of my colleagues in the Center for Family Demography and Public Policy on the 7th floor of my building here at Columbia. My impression is that these folks start with a belief in social intervention for the poor and do research in this area, measuring attitudes and outcomes and evaluating interventions. Again, I don't think they "cheat" in their research--rather, they work on problems that they consider important.

This all reminded me of something Gary King once said about our own research, which is that nobody could ever figure out our own political leanings by reading our papers. I'm not saying this to put ourselves above (or below) the researchers mentioned above--it's just an interesting distinction to me, of different styles of social science research. I mean, there's no reason I couldn't study privatized prisons or social-work interventions (and come to my own conclusion about either), it just hasn't really happened that way. (I've done some work on a couple of moderately politically-charged topics--the death penalty and city policing, but in neither case did I come into the project with strong views--these were just projects that people asked me to help out on.)

There's no competition here--there's room for politically committed and more dispassionate research--it's just interesting here to consider the distinction. (See here for more on the topic.) I think it takes a certain amount of focus and determination to pursue research on the topics that you consider to be the most politically important. I don't seem to really have this focus and so I end up working more on methodology or on topics that are interesting or seem helpful to somebody even if they aren't necessarily the world's most pressing problems.

My talk on redblue

| No Comments

I'll be speaking in Cambridge on 9 Oct for the Boston chapter of the American Statistical Association. Here's the info.

Neal writes,

In your entry on multilevel modeling, you note that de Leeuw was "pretty critical of Bayesian multilevel modeling." In your paper, you say that "compared with classical regression, multilevel modeling is almost always an improvement, but to varying degrees."

So my question to you is: other than issues of computations, and perhaps not jumping linguistic hoops, what is the relevance of the Bayesian modifier of multilevel modeling? Would the issues be any different for classical mixed effects modeling?

My response: the Bayesian version averages over uncertainty in the variance parameters. This is particularly important when the number of groups is small, when the model is complicated, or when the actual group-level variance is small, in which case it can get lost in the noise.

Also, we discuss some of this in Section 11.5 of our book. I hope we made the above point somewhere in the book, but I'm not sure we remembered to put it in. The point is made most clearly (to me) in the 8-schools example, which is in Chapter 5 of Bayesian Data Analysis and comes from a 1981 article by Don Rubin.
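
For readers who'd rather see the point than take my word for it, here's a self-contained version of the 8-schools calculation (the estimates and standard errors are from Rubin's 1981 article as given in Bayesian Data Analysis; the grid-and-simulate approach follows Chapter 5, though the code itself is just my quick sketch):

  y <- c(28, 8, -3, 7, -1, 1, 18, 12)     # estimated school effects
  s <- c(15, 10, 16, 11, 9, 11, 10, 18)   # their standard errors

  tau_grid <- seq(0.01, 40, length.out = 400)
  log_post <- sapply(tau_grid, function(tau) {   # marginal posterior of tau (uniform prior)
    V <- 1 / sum(1 / (s^2 + tau^2))
    mu_hat <- V * sum(y / (s^2 + tau^2))
    0.5 * log(V) - 0.5 * sum(log(s^2 + tau^2)) -
      0.5 * sum((y - mu_hat)^2 / (s^2 + tau^2))
  })
  post <- exp(log_post - max(log_post))
  post <- post / sum(post)

  draw_theta1 <- function(tau, n) {   # simulate school 1's effect given tau
    V <- 1 / sum(1 / (s^2 + tau^2))
    mu <- rnorm(n, V * sum(y / (s^2 + tau^2)), sqrt(V))
    prec <- 1 / s[1]^2 + 1 / tau^2
    rnorm(n, (y[1] / s[1]^2 + mu / tau^2) / prec, sqrt(1 / prec))
  }
  n_sims <- 5000
  tau_draws <- sample(tau_grid, n_sims, replace = TRUE, prob = post)
  theta_full <- sapply(tau_draws, function(tau) draw_theta1(tau, 1))
  theta_plug <- draw_theta1(tau_grid[which.max(post)], n_sims)

  sd(theta_full)   # full Bayes: averages over the uncertainty in tau
  sd(theta_plug)   # plug-in: conditions on a single value of tau, so it's narrower

The posterior for tau is concentrated near zero but very spread out, so conditioning on any single value of tau, as a classical mixed-effects fit effectively does, gives noticeably narrower intervals for the school effects than averaging over tau does.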

I think I should attend this talk (see below) by the renowned Vladimir Vapnik, but once again the language of computer science leaves me baffled:

Neal writes,

Thanks for bringing up the most interesting piece by Gerber and Malhotra and the Drum comment.

My own take is perhaps a bit less sinister but more worrisome than Drum's interpretation of the results. The issue is how "tweaking" is interpreted. Imagine a preliminary analysis which shows a key variable to have a standard error as large as its coefficient (in a regression). Many people would simply stop the analysis at that point. Now consider getting a coefficient one and a half times its standard error (or 1.6 times its standard error). We all know it is not hard at that point to try a few different specifications and find one that gives a magic p-value just under .05, hence earning the magic star. But of course the magic star seems critical for publication.

Thus I think the problem is with journal editors and reviewers who love that magic star, and hence with authors who think it matters whether t is 1.64 or 1.65. Journal editors could (and should) correct this.

When Political Analysis went quarterly we got it about a third right. Our instructions are:

"In most cases, the uncertainty of numerical estimates is better conveyed by confidence intervals or standard errors (or complete likelihood functions or posterior distributions), rather than by hypothesis tests and p-values. However, for those authors who wish to report "statistical significance," statistics with probability levels of less than .001, .01, and .05 may be flagged with 3, 2, and 1 asterisks, respectively, with notes that they are significant at the given levels. Exact probability values may always be given. Political Analysis follows the conventional usage that the unmodified term "significant" implies statistical significance at the 5% level. Authors should not depart from this convention without good reason and without clearly indicating to readers the departure from convention."

Would that I had had the guts to drop "In most cases" and stop after the first sentence. And even better would have been to simply demand a confidence interval.

Most (of the few) people I talk with have no difficulty distinguishing "insignificant" from "equals zero," but Jeff Gill in his "The Insignificance of Null Hypothesis Significance Testing" (Political Research Quarterly, 1999) has a lot of examples showing I do not talk with a random sample of political scientists. Has the world improved since 1999?

BTW, since you know my obsession with what Bayes can or cannot do to improve life, this whole issue is, in my mind, the big win for Bayesians. Anything that lets people not get excited or depressed depending on whether a CI (er, HPD credible region) is (-.01, 1.99) or (.01, 2.01) has to be good.

My take on this: I basically agree. In many fields, you need that statistical significance--even if you have to try lots of tests to find it.
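
Neal's point about trying "a few different specifications" is easy to quantify with a little simulation: with a truly null effect and a handful of optional control variables, taking the best p-value across specifications noticeably inflates the nominal 5% rate. (The numbers here are made up; the inflation depends on how many specifications get tried and how correlated they are.)

  set.seed(9)
  one_search <- function() {
    n <- 100
    x <- rnorm(n)                       # predictor of interest; truly unrelated to y
    z <- matrix(rnorm(n * 4), n, 4)     # four optional control variables
    y <- rnorm(n)
    specs <- list(y ~ x, y ~ x + z[, 1], y ~ x + z[, 2], y ~ x + z[, 1] + z[, 2],
                  y ~ x + z[, 3], y ~ x + z[, 4], y ~ x + z[, 3] + z[, 4])
    min(sapply(specs, function(f) summary(lm(f))$coefficients["x", 4]))
  }
  mean(replicate(2000, one_search()) < 0.05)   # noticeably above the nominal 0.05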

No kidding

| 1 Comment

As advertised, this really does seem like the most boring blogger. On the upside, he's probably not updating it during working hours.

Silly stuff

| 7 Comments

Jeronimo and Aleks sent me these:

sin_1.jpg

expanded_1.jpg

findx_1.jpg

P.S. Actually, I don't like the first one above because it's so obviously fake. I mean, they might all be fake, but it's clear that nobody would ever be given a question of the form, "1/n sin x = ?". It just doesn't mean anything. Somebody must have come up with the "six" idea and then worked backwards to get the joke.

The third one also looks fake, in that who would ever be given something as simple as 3, 4, 5? But who knows....

P.S. Corey Yanofsky sent this:

limit.png

Privacy vs Transparency

| No Comments

I was very entertained by the ACLU's Pizza animation, which dramatizes the fears of privacy advocates. On the other side, there are voices arguing that a transparent society might not be such a bad idea.

I have come across Vapnik vs Bayesian Machine Learning - a set of notes by the philosopher of science David Corfield. I agree with his notes, and find them quite balanced, although they are not necessarily easy reading. My personal view is that SLT (statistical learning theory) derives from attempts to mathematically characterize the properties of a model, whereas the Bayesian approach instead works by molding and adapting within a malleable language of models. Bayesians have a lot more flexibility with respect to what models they can create, relying on flexible general-purpose tools: having a vague posterior is often a benefit, but also a computational burden. On the other hand, SLT users focus on fitting the equivalent of a MAP estimate, being a bit haphazard about the regularization (the equivalent of a prior) but benefiting from modern optimization techniques.
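
One way to see the regularization-as-prior correspondence concretely: the ridge-penalized least-squares solution coincides with the posterior mode (and, in this Gaussian case, the posterior mean) under a normal prior on the coefficients. A quick numerical check in R, with the prior variance chosen to match the penalty:

  set.seed(10)
  n <- 50; p <- 5
  X <- matrix(rnorm(n * p), n, p)
  y <- X %*% rnorm(p) + rnorm(n)

  lambda <- 2               # ridge penalty
  sigma2 <- 1               # assumed noise variance
  tau2   <- sigma2 / lambda # prior variance that matches the penalty

  ridge     <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
  post_mode <- solve(t(X) %*% X / sigma2 + diag(p) / tau2, t(X) %*% y / sigma2)

  max(abs(ridge - post_mode))   # zero, up to rounding error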

Over the past few years I have enjoyed communicating with several philosophers of science, including, for example, Malcolm Forster. The philosophers attempt to read and understand the work of several research streams in the same line and make sense of them. The research streams themselves, on the other hand, spend less time trying to understand one another and more time performing guerrilla-warfare operations during anonymous paper reviews.

Here's the listing for the Family Demography and Public Policy Seminar this semester:

Within-group sample sizes

| No Comments

Georgia asks:

Michael Papenfus writes, regarding this article on multilevel modeling,

I [Papenfus] am currently working on trying to better understand the assumptions underlying the random effects (both varying intercepts and varying slopes) in hierarchical models. My question is: are there any hierarchical modeling techniques which allow one to include regressors which are correlated with the random effects or is this situation an example of what these models cannot do?

My short answer is that, when an individual-level predictor x is correlated with the group coefficients (I prefer to avoid the term "random effects"), you can include the group-level average of x as a group-level predictor. See here for Joe's entry on this topic, along with a link to our paper on the subject. (We also briefly discuss this issue in Section 21.7 of our new book.)
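
Here's a minimal simulation of the fix, using lmer from the lme4 package (an illustration of the group-mean trick, not the model in the paper linked above): when x is correlated with the group intercepts, the plain varying-intercept fit pulls the coefficient on x toward the between-group slope, and adding the group mean of x as a predictor recovers the within-group coefficient.

  library(lme4)
  set.seed(11)

  J <- 100; n_per <- 5
  group <- rep(1:J, each = n_per)
  a <- rnorm(J)                                  # group intercepts
  x <- rnorm(J * n_per, mean = a[group])         # x is correlated with the group intercepts
  y <- 2 + 1 * x + a[group] + rnorm(J * n_per)   # true within-group coefficient on x is 1
  x_bar <- ave(x, group)                         # group mean of x

  fixef(lmer(y ~ x + (1 | group)))           # coefficient on x is pulled toward the between-group slope
  fixef(lmer(y ~ x + x_bar + (1 | group)))   # adding the group mean gives a coefficient near 1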

In a comment to this entry on Gardner and Oswald's finding that people who won between £1000 and £120,000 in the lottery were happier than people in two control groups, Tony Vallencourt writes,

Daniel Kahneman, Alan Krueger, David Schkade, Norbert Schwarz, and Arthur Stone disagree with this result. It's funny, yesterday, I came across this post and then across Kahneman et al's result in Tuesday Morning Quarterback on ESPN's Page 2.

I [Vallencourt] wrote it up on my blog. I'm not sure who I believe, but I know that I'd like to have more money myself.

One possibility is that regular $ (which you have to work for) isn't such a thrill, but the unexpected $ of the lottery is better.

I actually wonder about the £1000 lottery gains, though, since I suppose that many (most?) of these "winners" end up losing more than £1000 anyway from repeated lottery playing. Even the £120,000 winners might gamble much or all of it away.

Regarding unexpected $, I have the opposite problem: book royalties are always unexpected to me (even though I get them every 6 months!). I've always felt that a little mental accounting would do me some good--I'd like to imagine these royalties as something I could spend on some special treat--but, bound as I am to mathematical rules of rationality, I just end up putting these little checks into the bank and never see them again. Mental accounting is said to be a cognitive illusion, but here it might be nice. Perhaps I could think of these royalties as poker winnings?

And, yes, I too would prefer to have more money--but I don't know that it would make me happier. Or maybe I should say: I don't know whether money would make me happier, but I'd still like to have more of it. I naively think that, if I had the choice between happiness state X and happiness state X plus $1000 (i.e., assuming the $1000 doesn't make me any happier), I'd still like to have the extra $. But maybe I'm missing the point. And, of course, as the Tuesday Morning Quarterback points out, extra money doesn't usually come for free--you have to work for it, which takes time away from other pursuits.

So maybe this is really a problem of causal inference. Or, to put it in a regression context, what variables should we hold constant when considering different values the "money" input variable? Do we control for hours worked or not? Different versions of the "treatment" of money could have different effects, which brings us back to the point at the beginning of this note.

About this Archive

This page is an archive of entries from October 2006 listed from newest to oldest.
