March 2005 Archives

Susan writes:

I've started reading the piece you sent me on Seth. Very interesting stuff. I generally tend to think that one can get useful evidence from a wide variety of sources -- as long as one keeps in mind the nature of the limitations (and every data source has some kind of limitation!). Even anecdotes can generate important hypotheses. (Piaget's observations of his own babies are great examples of real insights obtained from close attention paid to a small number of children over time. Not that I agree with everything he says.) I understand the concerns about single-subject, non-blind, and/or uncontrolled studies, and wouldn't want to initiate a large-scale intervention on the basis of these data. But from the little bit I've read so far, it does sound like Seth's method might elicit really useful demonstrations, as well as generating hypotheses that are testable with more standard methods. But I also think it matters what type of evidence one is talking about -- e.g., one can fairly directly assess one's own mood or weight or sleep patterns, but one cannot introspect about speed of processing or effects of one's childhood on present behavior, or other such things.

So I clicked on the link on our webpage to Decision Science News, flipped through there and then on to his links . . . hmmm, a link to the psychologist Jon Baron, who studies thinking and decision making. . .

Stephen Coate (Dept. of Economics, Cornell) and Brian Knight (Dept. of Economics, Brown) wrote a paper, "Socially Optimal Redistricting," with a theoretical derivation of seats-votes curves. The paper cites some of my work with Gary King on empirically estimating seats-votes curves. Coate and Knight sent the paper to Gary, who forwarded it to me. It's an interesting paper but has a slight misrepresentation of what Gary and I did in studying seats-votes curves and redistricting.

A few years ago I picked up the book Virtual History: Alternatives and Counterfactuals, edited by Niall Ferguson. It's a book of essays by historians on possible alternative courses of history (what if Charles I had avoided the English civil war, what if there had been no American Revolution, what if Irish home rule had been established in 1912, ...).

There have been and continue to be other books of this sort (for example, What If: Eminent Historians Imagine What Might Have Been, edited by Robert Cowley), but what makes the Ferguson book different is that he and most of the other authors in his book are fairly rigorous in considering only those actions that the relevant historical personalities were actually considering. In the words of Ferguson's introduction: "We shall consider as plausible or probable only those alternatives which we can show on the basis of contemporary evidence that contemporaries actually considered."

I like this idea because it is a potentially rigorous extension of the now-standard "Rubin model" of causal inference.

Postdoctoral position available

| 1 Comment

Postdoctoral research opportunity: Columbia University, Departments of Epidemiology and Statistics

Supervisors: Ezra Susser (epidemiology) and Andrew Gelman (statistics)

We have an NIH-funded postdoctoral position (1 or 2 years) available for what is essentially statistical research as applied to some important problems in psychiatric epidemiology. One project we are working on is the Jerusalem Perinatal Study of Schizophrenia, a birth cohort of about 90,000 (born 1966-1974) followed for schizophrenia in adulthood. Another project is a California birth cohort study of schizophrenia--this is a cohort of 20,000 collected in 1959-1966 for which we have ascertained/diagnosed 71 cases of schizophrenia spectrum disorders. The data set already exists and has produced several important findings. The statistical methods involve fitting and understanding multilevel models; see below. The position can also involve some teaching in the Statistics Department if desired.

Statistical Project 1: Tools for understanding and display of regressions and multilevel models

Modern statistical packages allow us to fit ever-more-complicated models, but there is a lag in the ability of applied researchers (and of statisticians!) to understand these models and check their fit to data. We are in the midst of developing several tools for summarizing regressions, generalized linear models, and multilevel models—these tools include graphical summaries of predictive comparisons, numerical summaries of average predictive comparisons, measures of explained variance (R-squared) and partial pooling, and analysis of variance. To move this work to the next stage we need to program the methods for general use (writing them as packages in the popular open-source statistical language R) and further develop them in the context of ongoing applied research projects.
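As a rough illustration of what one of these summaries looks like, here is a minimal R sketch of an average predictive comparison for one input in a logistic regression (simulated data and made-up coefficient values; this is not our package code):

    # Average predictive comparison for a binary input x1: the average
    # change in Pr(y=1) when x1 goes from 0 to 1, holding the other
    # inputs at their observed values.
    set.seed(1)
    n <- 1000
    x1 <- rbinom(n, 1, 0.5)
    x2 <- rnorm(n)
    y <- rbinom(n, 1, plogis(-1 + 1.5*x1 + 0.8*x2))
    fit <- glm(y ~ x1 + x2, family = binomial)
    p1 <- predict(fit, data.frame(x1 = 1, x2 = x2), type = "response")
    p0 <- predict(fit, data.frame(x1 = 0, x2 = x2), type = "response")
    mean(p1 - p0)  # average predictive comparison for x1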

Statistical Project 2: Deep interactions in multilevel regression

In regressions and generalized linear models, factors with large effects commonly have large interactions. But in a multilevel context in which factors can have many levels, this can imply many, many potential interaction coefficients. How can these be estimated in a stable manner? We are exploring a doubly-hierarchical Bayes approach, in which the first level of the hierarchy is the usual units-within-groups structure (for example, patients within hospitals) in which coefficients are partially pooled, and the second level is a hierarchical model of the variance components (so that the different amounts of partial pooling are themselves modeled). The goal is to be able to include a large number of predictors and interactions without the worry that lack of statistical significance will make the estimates too noisy to be useful. We plan to develop these methods in the context of ongoing applied research projects.
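To give a feel for the structure (a simulation sketch only, under assumed hyperparameter values--not our estimation code), here is the two-level idea in R: batches of interaction coefficients are partially pooled within batch, and the batch-level standard deviations are themselves drawn from a common distribution:

    set.seed(1)
    n_batches <- 5      # e.g., different families of interactions
    n_coefs <- 20       # coefficients per batch
    sigma_hyper <- 0.5  # assumed scale for the distribution of batch sd's
    batch_sd <- abs(rnorm(n_batches, 0, sigma_hyper))  # second level: the sd's themselves are modeled
    beta <- lapply(1:n_batches, function(k) rnorm(n_coefs, 0, batch_sd[k]))  # first level
    round(sapply(beta, sd), 2)  # batches with small sd are pooled strongly toward zero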

If you are interested . . .

Please send a letter to Prof. Andrew Gelman (Dept of Statistics, Columbia University, New York, N.Y. 10027, gelman@stat.columbia.edu), along with c.v., copies of any relevant papers of yours, and three letters of recommendation.

Research, Google-style

| 10 Comments

In my correspondence with Boris about Barone's column on rich Democrats, I expressed surprise at Barone's statement that "Patriotism is equated with Hitlerism" (among leftists). Boris referred me to this article by Victor Davis Hanson, which indeed has examples of leftists (and even moderate Democrats like John Glenn) comparing Bush to the Nazis.

But aren't the Democrats just following the lead of the Clinton-haters in the 1990s? Hanson says no:

The flood of the Hitler similes is also a sign of the extremism of the times. If there was an era when the extreme Right was more likely to slander a liberal as a communist than a leftist was to smear a conservative as a fascist, those days are long past. True, Bill Clinton brought the deductive haters out of the woodwork, but for all their cruel caricature, few compared him to a mass-murdering Mao or Stalin for his embrace of tax hikes and more government. “Slick Willie” was not quite “Adolf Hitler” or “Joseph Stalin.”

Hmmm . . . this got me curious, so I followed Hanson's tip and did some Google searches:

bush hitler: 1.5 million
clinton hitler: 0.7 million

What about some other comparisons?

bush god: 8.6 million
clinton god: 3.4 million

So Bush is both more loved and hated than Clinton, perhaps. But then again, there's been a huge growth in the internet in the past few years, so maybe more Bush than Clinton for purely topical reasons?

bush: 83 million
clinton: 25 million

Hmm, let's try something completely unrelated to politics:

bush giraffe: 180,000
clinton giraffe: 23,000

OK, maybe not a good comparison, since giraffes live in the bush. Let's try something that's associated with Clinton but not with Bush:

bush mcdonalds: 440,000
clinton mcdonalds: 200,000

At this point, I'm getting the clear impression that Bush is getting more hits than Clinton on just about everything! So no evidence here that he's being Hitlerized more than Clinton was. It looks like the big number for "bush hitler" is more of an artifact of the spread of the web. [Place disclaimers here about the use of Google as a very crude research tool!]
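As a quick sanity check, here are those counts expressed as bush/clinton ratios (a minimal R sketch using the numbers from the searches above):

    hits <- data.frame(
      term = c("hitler", "god", "(name alone)", "giraffe", "mcdonalds"),
      bush = c(1.5e6, 8.6e6, 83e6, 180e3, 440e3),
      clinton = c(0.7e6, 3.4e6, 25e6, 23e3, 200e3))
    hits$ratio <- round(hits$bush / hits$clinton, 1)
    hits
    # The "hitler" ratio (about 2.1) is actually lower than the baseline
    # ratio for the names alone (about 3.3), consistent with the
    # artifact explanation.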

This certainly doesn't invalidate, or even argue against, Hanson's main points. It just suggests that we should be similarly concerned about haters on the other side.

OK, I guess that's enough on this topic . . . maybe a good example for statistics teaching, though? Googlefighting as data analysis? Perhaps Cynthia Dwork, David Madigan, or some other student of web rankings can come up with more sophisticated analyses.

P.S. Update with discussion here.

P.S. Much much more on this general topic here, and here.

Boris forwarded to me this article by Michael Barone on "the trustfunder left." Some excerpts:

Who are the trustfunders? People with enough money not to have to work for a living, or not to have to work very hard. . . . These people tend to be very liberal politically. Aware that they have done nothing to earn their money, they feel a certain sense of guilt. . . . they are citizens of the world with contempt for those who feel chills up their spines when they hear "The Star Spangled Banner." . . . Where can you find trustfunders? Not scattered randomly around the country, but heavily concentrated in certain areas. . . . Trustfunders stand out even more vividly when you look at the political map of the Rocky Mountain states. In Idaho and Wyoming, each state's wealthiest county was also the only county to vote for John Kerry . . . Massachusetts Catholics gave their fellow Massachusetts Catholic Kerry only 51 percent of their votes, but he won 77 percent in Boston, 85 percent in Cambridge, and 69 percent and 73 percent in trustfunder-heavy Hampshire and Berkshire Counties in the western mountains. . . .

Rich states and counties mostly support the Democrats, but rich voters mostly support the Republicans

This is vivid writing but, I think, incorrect electoral analysis. Barone is making the common error of "personifying" states and counties. Since 1996, and especially since 2000, rich states and rich counties have tended to support the Democrats--but rich voters have continued to support the Republicans.

For example, as David Park found looking through the exit polls, the 2004 election showed a consistent correlation between income and support for the Republicans, with Bush getting the support of 36% of voters with incomes below $15,000, 42% of those with incomes between $15,000 and $30,000, . . . and 62% of those with incomes above $200,000.

Given these statistics, I strongly doubt that trustfunders--in Barone's words, "people with enough money not to have to work for a living, or not to have to work very hard"--are mostly liberal, as he claims. Of course it's possible, but the data strongly support the statements that (a) richer people tend to support the Republicans, but (b) voters in richer states (and, to some extent, counties) tend to support Democrats. There definitely are differences between richer and poorer states--but the evidence is that, within any state, the richer voters tend to go for the Republicans. See here for more.

Confusion of the columnists

My first thought on seeing Barone's article was disappointment that the author of the Almanac of American Politics would write something so misinformed. However, other columnists have made the same mistake. For example, here's Nicholas Kristof in the New York Times.

The interesting thing is that the conceptual confusion between patterns among states and among individuals (sometimes called the "ecological fallacy" or "Simpson's paradox" in statistics) led Barone to confusion even at the state and county level. For example, he writes,

Where Democrats had a good year in 2004 they owed much to trustfunders. In Colorado, they captured a Senate and a House seat and both houses of the legislature. Their political base in that state is increasingly not the oppressed proletariat of Denver, but the trustfunder-heavy counties that contain Aspen (68 percent for Kerry), Telluride (72 percent) and Boulder (66 percent). . . .

I went and looked it up. Actually, Kerry got 70% of the vote in Denver.

What's going on?

How can Barone, an experienced observer who knows a lot more about voting patterns than I do, make this mistake--not recognizing that rich people are voting for Republicans and not even noticing that Kerry got 70% of the vote in Denver? I think the fundamental problem, both of conservatives like Barone and liberals on the other side, is not coming to grips with the basic fact that both parties have close to 50% support.

Perhaps the Democrats are the party of trustfunders, welfare cheats, drug addicts, communists, and whatever other categories of people you don't like. Perhaps the Republicans are the party of rich CEOs, bigots, fascists, and so forth. No matter how you slice it, both sides have to add up to 50%, so you either have to throw in a lot of "normal" voters on both sides or else you have to marginalize large chunks of the population.

For example, Barone notes that Kerry won only 51% of the Catholic votes in Massachusetts. That looks pretty bad--he's so unpopular that he barely got the support of voters of his own state and religion. But, hey, he got 48% of the national vote, so somebody was voting for him. And considering that Bush got 62% of the voters with incomes over $200,000, Kerry's voters can't all be trustfunders!

Barone might be right, however, when he cites the trustfunders as a new source of money for the Democrats (as they of course also are for the Republicans). And, as a political matter, it might very well be a bad thing if both political parties are being funded by people from the top of the income distribution. This would be an interesting thing to look at. There's a wide spectrum of political participation, ranging from voting, to campaign contributions, to activism (see Verba, Schlozman, and Brady), and the demographics of these contributors and activists are potentially important. But you're not going to find this by looking at state-level or county-level vote returns.

Reasoning by analogy?

I clicked through to the link on Barone's page to his book, "Hard America, Soft America." This looks much more reasonable. I wonder if he caught on to something real with "Hard America, Soft America" and then too quickly generalized it to imply, "anyone I agree with is part of Hard America, which I like" and "anyone I disagree with is part of Soft America, which I dislike."

It wouldn't be the first time that a smart person was led by ideology to overgeneralize.

P.S. See also here, here, and here, and here for various takes on Barone's article.

Question about causal inference

| 5 Comments

Judea Pearl (Dept of Computer Science, UCLA) spoke here Tuesday on "Inference with cause and effect." I think I understood the method he was describing, but it left me with some questions about the method's hidden assumptions. Perhaps someone familiar with this approach can help me out here.

I'll work with a specific example from one of my current research projects.

Decision Science News

| No Comments

Dan Goldstein, who runs the Center for Decision Sciences seminar at Columbia (along with Dave Krantz and Elke Weber), has a blog called Decision Science News.

I got a call from Joe Ax, a reporter at the (Westchester) Journal News, because there had recently been two different tied elections in the county. (See here for some links.) He wanted my estimate of the probability of a tied election. Well, there were actually only about 1000 votes in each election, so the probability of a tie wasn't so low. . . . (For an expected-to-be-close election with n voters, I estimate Pr(tie) roughly as 5/n. This is based on, first, the assumption that there is a 1/2 probability of an even total number of votes for the 2 candidates (otherwise you can't have a tie), and, second, the assumption that the vote split is roughly equally likely to fall anywhere between 45% and 55%, so that any one particular split--such as an exact tie--has probability about 10/n. Thus 1/2 x 10/n = 5/n.)

I also mentioned that some people would calculate the probability based on coin flipping, but I don't like that because it assumes that everyone's probability is 1/2 and that voters are independent, neither of which is true (and also the coin-flipping model doesn't come close to fitting actual election data).
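For concreteness, here's a minimal R check for n = 1000, comparing the 5/n heuristic with the coin-flipping model just mentioned:

    n <- 1000
    5 / n                # the 5/n heuristic: 0.005
    dbinom(n/2, n, 0.5)  # coin-flipping model: about 0.025
    # The coin-flip model gives a much larger tie probability because it
    # forces the vote split to cluster unrealistically tightly around 50/50.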

Coin flips and babies

An hour or so later Joe called me back and said that he'd mentioned this to some people, and someone told him that he'd heard that actually heads are slightly more common than tails. What did I think of this? I replied that heads and tails are equally likely when a coin is flipped (although not necessarily when spun), but maybe his colleague was remembering the fact that births are more likely to be boys than girls.

P.S. Here's the Journal News article (featuring my probability calculations).

Bayes in China

| 1 Comment

Xiao-Li confirmed that they didn't like Bayes in China (or at least in Shanghai) when he was a student. He writes:

Yes, I do [remember], and it's no laughing matter then! What happened was that the notion of "prior" contradicted one of Mao's quotation "truth comes out of empirical/practical evidence" (my translation is not perfect, but you can get the essence) -- and anything contradicts what Mao said was banned!

Do any other Chinese statisticians have stories like this?

Lowess is great

| 5 Comments

One of the commentaries in Behavioral and Brain Sciences on Seth Roberts's article on self-experimentation was by Martin Voracek and Maryanne Fisher. They had a bunch of negative things to say about self-experimentation, but as a statistician, I was struck by their concern about "the overuse of the loess procedure." I think lowess (or loess) is just wonderful, and I don't know that I've ever seen it overused.

Curious, I looked up "Martin Voracek" on the web and found an article about body measurements from the British Medical Journal. The title of the article promised "trend analysis" and I was wondering what statistical methods they used--something more sophisticated than lowess, perhaps?

They did have one figure, and here it is:

[Figure 1 from Voracek and Fisher's BMJ article: scatterplots of body measurements over time with fitted straight lines]

Voracek and Fisher, the critics of lowess, fit straight lines to clearly nonlinear data! It's most obvious in their leftmost graph. Voracek and Fisher get full credit for showing scatterplots, but hey . . . they should try lowess next time! What's really funny in the graph are the little dotted lines indicating inferential uncertainty in the regression lines--all under the assumption of linearity, of course. (You can see enlarged versions of their graphs at this link.)

As usual, my own house has some glass-based construction and so it's probably not so wise of me to throw stones, but really! Not knowing about lowess is one thing, but knowing about it, then fitting a straight line to nonlinear data, then criticizing someone else for doing it right--that's a bit much.

Not just lowess

Just to be clear, when I say "lowess is great," I really mean "smoothing regression is great"--lowess, also splines, generalized additive models, and all the other things that Cleveland, Hastie, Tibshirani, etc., have developed. (One of the current challenges in Bayesian data analysis is to integrate such methods. Maybe David Dunson will figure it all out.)
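For what it's worth, here is how little effort a smooth fit takes in R (a minimal sketch with simulated nonlinear data, using the built-in lowess() and smooth.spline()):

    set.seed(1)
    x <- runif(100, 0, 10)
    y <- sin(x) + rnorm(100, 0, 0.3)          # clearly nonlinear data
    plot(x, y)
    abline(lm(y ~ x), lty = 2)                # the straight-line fit misses the curve
    lines(lowess(x, y), col = "red")          # lowess
    lines(smooth.spline(x, y), col = "blue")  # a spline smoother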

bugs.R question

| 3 Comments

This one's just for the bugs.R users out there . . .

Seth Roberts is a professor of psychology at Berkeley who has used self-experimentation to generate and study hypotheses about sleep, mood, and nutrition. He wrote an article in Behavioral and Brain Sciences describing ten of his self-experiments. Some of his findings:

Seeing faces in the morning on television decreased mood in the evening and improved mood the next day . . . Standing 8 hours per day reduced early awakening and made sleep more restorative . . . Drinking unflavored fructose water caused a large weight loss that has lasted more than 1 year . . .

As Seth describes it, self-experimentation generates new hypotheses and is also an inexpensive way to test and modify them. One of the commenters, Sigrid Glenn, points out that this is particularly true for long-term series of measurements, which might be difficult to carry out on experimental volunteers.

Heated discussion

Behavioral and Brain Sciences is a journal of discussion papers, and this one had 13 commenters and a response by Roberts. About half the commenters love the paper and half hate it. My favorite "hate it" comment is by David Booth, who writes, "Roberts can swap anecdotes with his readers for a very long time, but scientific understanding is not advanced until a literature-informed hypothesis is tested between or within groups in a fully controlled design shown to be double-blind." Tough talk, and controlled experiments are great (recall the example of the effects of estrogen therapy), but Booth is being far too restrictive. Useful hypotheses are not always "literature-informed," and lots has been learned scientifically from experiments without controls or blinding. This "NIH" model of science is fine but certainly is not all-encompassing (a point made in Cabanac's discussion of the Roberts paper).

The negative commenters were mostly upset by the lack of controls and blinding in self-experiments, whereas the positive commenters focused on individual variation, and the possibility of self-monitoring to establish effective treatments (for example, for smoking cessation) for individuals.

In his response, Roberts discusses the various ways in which self-experimentation fits into the landscape of scientific methods.

My comments

I liked the paper. I followed the usual strategy with discussion papers and read the commentary and the response first. This was all interesting, but then when I went back to read the paper I was really impressed, first by all the data (over 50 (that's right, 50) scatterplots of different data he had gathered), and second by the discussion and interpretation of his findings in the context of the literature in psychology, biology, and medicine.

The article has as much information as is in many books, and it could easily be expanded into a book ("Self-experimentation as a Way of Life"?). Anyway, reading the article and discussions led me to a few thoughts which maybe Seth or someone else could answer.

First, Seth's 10 experiments were pretty cool. But they took ten years to do. It seems that little happened for the first five years or so, but then there were some big successes. It would be helpful to know if he started doing something in the last five years that made his methods more effective. If someone else wants to start self-experimenting, is there a way to skip over those five slow years?

Second, his results on depression and weight control, if they turn out to generalize to many others, are huge. What's the next step? Might there be a justification for relatively large controlled studies (for example, on 100 or 200 volunteers, randomly assigned to different treatments)? Even if the treatments are not yet perfected, I'd think that a successful controlled trial would be a big convincer which could lead to greater happiness for many people.

Third, as some of the commenters pointed out, good self-experimentation includes manipulations (that is, experimentation) but also careful and dense measurements--"self-surveillance." If I were to start self-experimentation, I might start with self-surveillance, partly because the results of passive measurements might themselves suggest ideas. All of us do some self-experimentation now and then (trying different diets, exercise regimens, work strategies, and so on). Where I suspect we fall short is in the discipline of regular measurements over a long enough period of time.

Finally, what does this all say about how we should do science? How can self-experimentation and related semi-formal methods of scientific inquiry be integrated into the larger scientific enterprise? What is the point where researchers should jump to a larger controlled trial? Seth talks about the benefits of proceeding slowly and learning in detail, but if you have an idea that something might really work, there are benefits in learning more about it sooner.

P.S. Some of Seth's follow-up studies on volunteers are described here (for some reason, this document is not linked to from Seth's webpage, but it's referred to in his Behavioral and Brain Sciences article).

One of the major figures in Segerstrale's book is John Maynard Smith, whom she refers to as "Maynard Smith." Shouldn't it be just "Smith"? Perhaps it's a British thing? When reading about 20th-century English history, I always wondered why David Lloyd George was called "Lloyd George" rather than simply "George," but I figured that was just to avoid confusing him with the king of that name.

In the comments to this entry, Aleks points out that the correlations between scientific views and political ideology are not 100%, even at any particular point in time. (In my earlier entry, I had discussed how these political alignments have shifted over time.)

The question then arises: why care about this at all? Why not just evaluate the science on scientific grounds and ignore the ideology?

Yanan Fan, Steve Brooks, and I wrote a paper on using the score statistic to assess convergence of simulation output, which will appear in the Journal of Computational and Graphical Statistics. The idea of the paper is to make use of certain identities involving the derivative of the logarithm of the target density. The paper introduces two convergence diagnostics. The first method uses the identity that the expected value of this derivative should be zero (if one is indeed drawing from the target distribution). The second method compares marginal densities estimated empirically from simulation draws to those estimated using path sampling. For both methods, multiple chains can be used to assess convergence, as we illustrate with some examples.
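Here is a minimal R sketch of the idea behind the first diagnostic (a toy normal target, not the paper's implementation): under the target density, the average score should be near zero, and a chain that hasn't converged shows a systematically nonzero average.

    set.seed(1)
    mu <- 3; sigma <- 2
    score <- function(theta) -(theta - mu) / sigma^2  # d/dtheta of log N(theta | mu, sigma^2)
    good <- rnorm(5000, mu, sigma)     # draws from the target
    bad <- rnorm(5000, mu + 1, sigma)  # a chain stuck off-target
    mean(score(good))  # near 0
    mean(score(bad))   # about -0.25, flagging non-convergence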

Well, now that I'm telling stories . . . When reading "Defenders of the Truth," I came across the name of Stephen Chorover--he was one of the left-wing anti-sociobiology people. As a freshman at MIT, I took introductory psychology (9.00, I believe it was), and Chorover was one of the two professors. He would give these really vague lectures--the only thing I remember was when he told us about his experiences with mescaline. He said something like, "I don't recommend that you take drugs, but the only way you'll know what it's like is to try it." Seemed like a real burned-out 60's type. (The course was co-taught, and the other prof was a young guy named Jeremy Wolfe, who was a dynamic lecturer but unfortunately spent all his time talking about perception, mostly vision, which might be interesting but certainly wasn't why a college freshman takes psychology.) The course also had a weekly evening meeting that was in a room too small for us all to fit in, because, they told us, "we know you won't show up anyway." Another great message to send to the freshmen . . .

(I really shouldn't go around mocking college instructors, since I know I have my own flaws. In my first semester of teaching, one of the students came up to me at the end of the semester and said, "Don't worry, Prof. Gelman. You'll do a better job teaching next time.")

Anyway, it was just funny to see Chorover's name in print after so many years. Also, Steven Pinker gave a guest lecture in that intro psych class of ours, but that was before he became political.

Science and ideology

| 4 Comments

Writing about the changing nature of science and ideology (see also here) reminds me that in grad school, Joe Schafer used to talk about the "left-wing Bayesians" and the "right-wing frequentists," which might even have been true although I can't see any scientific reason for such an alignment. I mean, I can see a lot of rationalizations (for example, Bayesian inference was more of a new, maybe risky, approach, hence perhaps would be more popular with radicals than with conservatives), but they don't seem so convincing to me.

I also remember that Xiao-Li Meng told me that in China they didn't teach Bayesian statistics because the idea of a prior distribution was contrary to Communism (since the "prior" represented the overthrown traditions, I suppose). Or maybe he was pulling my leg, I dunno.

Contingency and ideology

| 8 Comments

Following Bob O'Hara's recommendation, I read Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond, by Ullica Segerstrale. As Bob noted in his comment, this is a story of a bunch of scientists who managed to have a highly ideological debate about evolutionary theory despite all being on the left side of the political spectrum (sort of like that famous scene from The Life of Brian with the Judean People's Front).

Nature vs. nurture, right vs. left

Anyway, I wanted to use this to continue the discussion of science and political ideology.

p(A|B) != p(B|A)

| 4 Comments

A common mistake in conditional probability is to confuse the conditioning (that is, to mistake p(A|B) for p(B|A)). One complication here is that our language for probability can be ambiguous. For example, I have done a classroom demo replicating the experiment of Kahneman and Tversky in which students guess "the percentage of African countries in the United Nations." I always thought this meant
100*(# African countries in U.N.)/(# countries in U.N.).
But some students thought this meant
100*(# African countries in U.N.)/(# countries in Africa).
So, to even ask the question clearly, I need to ask for "the percentage of countries in the U.N. that are in Africa," or something like that.

Anyway, I recently went to a talk by Maryanne Schretzman (Dept of Homeless Services, NYC), where an interesting example arose of the difference between p(A|B) and p(B|A). They're looking at new admissions to the shelter system, and a lot of them are people who have just been released from jail. But the jail administrators aren't so interested in talking about this, because, of all the people released from jail, only a small percentage go to homeless shelters. p(A|B) is high, but p(B|A) is small. Same numerator, but the denominator is much bigger in the latter case.
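A toy calculation makes the asymmetry concrete (made-up numbers, not the actual NYC data):

    jail_releases <- 50000   # hypothetical: people released from jail in a year
    shelter_admits <- 5000   # hypothetical: new shelter admissions in a year
    both <- 2000             # hypothetical: released inmates who enter shelters
    both / shelter_admits    # p(came from jail | shelter admission) = 0.4: high
    both / jail_releases     # p(goes to shelter | released from jail) = 0.04: low
    # Same numerator; the denominators differ by a factor of 10.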

Following up on this and this and this, Dan Ho sent me the following discussion of the differences between his, Jasjeet Sekhon's, and Ben Hansen's matching programs:

The secret weapon

| 2 Comments

An incredibly useful method is to fit a statistical model repeatedly on several different datasets and then display all these estimates together. For example, running a regression on data on each of 50 states (see here as discussed here), or running a regression on data for several years and plotting the estimated coefficients over time.
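Here is a minimal sketch of the second version in R (simulated data and assumed variable names): fit the same regression separately for each year and plot the coefficient estimates, with standard-error bars, over time.

    set.seed(1)
    years <- 1972:2000
    est <- se <- numeric(length(years))
    for (i in seq_along(years)) {
      x <- rnorm(500)
      y <- 0.1 + 0.02 * (years[i] - 1972) * x + rnorm(500)  # effect drifts upward over time
      fit <- summary(lm(y ~ x))
      est[i] <- fit$coefficients["x", "Estimate"]
      se[i] <- fit$coefficients["x", "Std. Error"]
    }
    plot(years, est, pch = 20, xlab = "year", ylab = "coefficient on x",
         ylim = range(est - 2*se, est + 2*se))
    segments(years, est - se, years, est + se)  # +/- 1 standard error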

Here's another example:

[Figure 8: estimates from a model fit repeatedly to different datasets, displayed together]

I was reading something the other day that referred in an offhand way to "meritocracy", which reminded me of a wide-ranging and interesting article by James Flynn (the discoverer of the "Flynn effect", the steady increase in average IQ scores over the past sixty years or so). Flynn's article talks about how we can understand variation in IQ within populations, between populations, and changes over time.

At the end of his article, Flynn gives a convincing argument that a meritocratic future is not going to happen and in fact is not really possible.

EDA for HLM

| 2 Comments

Matching and matching

| 2 Comments

