July 2005 Archives

N is never large

| No Comments

Sample sizes are never large. If N is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once N is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc etc). N is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.

Similarly, you never have quite enough money. But that's another story.
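Back to the statistical point: here's a small numerical sketch (with an assumed poll size, not from any real survey) of why N stops being "large" as soon as you start subdividing:

```python
import math

def moe(p, n):
    """Approximate 95% margin of error for a proportion estimated from n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

n_total = 1500          # an assumed national poll size
p = 0.5                 # worst-case proportion for the margin of error
print(f"national estimate:           +/- {moe(p, n_total):.3f}")

# Subdivide: men/women, four regions, five age groups -> many small cells
for label, n in [("one sex", n_total // 2),
                 ("one region", n_total // 4),
                 ("one sex x region x age cell", n_total // (2 * 4 * 5))]:
    print(f"{label:28s} +/- {moe(p, n):.3f}")
```

The national margin of error looks fine, but the cell-level estimates are back to being noisy, which is the sense in which N is never large enough.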

Reasons for randomization

| No Comments

I was at the UCLA statistics preprint site, which is full of interesting papers--we should do something like that here at Columbia--and came across this paper by Richard Berk on randomized experiments.

On a different topic: here's the abstract of my paper on Axelrod's The Evolution of Cooperation:

The Evolution of Cooperation, by Axelrod (1984), is a highly influential study that identifies the benefits of cooperative strategies in the iterated prisoner’s dilemma. We argue that the most extensive historical analysis in the book, a study of cooperative behavior in First World War trenches, is in error. Contrary to Axelrod’s claims, the soldiers in the Western Front were not generally in a prisoner’s dilemma (iterated or otherwise), and their cooperative behavior can be explained much more parsimoniously as immediately reducing their risks. We discuss the political implications of this misapplication of game theory.

Here's the paper.

In short: yes, the Prisoner's Dilemma is important; yes, Axelrod's book is fascinating; but no, the particular example he studied, of soldiers not shooting at each other in the Western Front in World War I, does not seem to be a Prisoner's Dilemma. I have no special knowledge of World War I; I base my claims on the same secondary source that Axelrod used. Basically, it was safer for soldiers to "cooperate" (i.e., not shoot), and their commanders had to manipulate the situation to get them to shoot. Not at all the Prisoner's Dilemma situation where shooting produced immediate gains.

In a way, this is merely a historical footnote; but it's interesting to me because of the nature of the explanations, Axelrod's eagerness to apply the inappropriate (as I see it) model to the situation, and others' willingness to accept that explanation. I think the idea that cooperation can "evolve"--even in a wartime setting--is a happy story that people like to hear, even when it's a poor description of the facts.

Other takes

Here are a bunch of positive reviews of Axelrod's book, and here's an article by Ken Binmore criticizing Axelrod's work on technical grounds.

Teaching Example

| 2 Comments

There was a fun little article in the New York Times a while back (unfortunately I can't find it now and am missing some of the numbers, but the main idea still holds) about income differences across New York City's five boroughs. Apparently the mean income in the Bronx is higher than in Brooklyn, even though Brooklyn has a smaller proportion of residents below the poverty line, higher percentage of homeowners, and lower unemployment. Why is income higher in the Bronx, then? The reason, according to the article, is the New York Yankees--Yankees' salaries are so high that they make the whole borough look richer than it is.

(I'm not sure exactly how these income figures were calculated, since most of the Yankees probably don't actually live in the Bronx, but let's ignore that.) Obviously one should be comparing medians rather than means, which is where the teaching example comes in. I told my regression students this story last semester and someone asked about Queens, but I don't think the Mets' payroll even comes close to that of the Yankees (who, by the way, are a game behind the Red Sox).
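Here's a toy version of the mean-versus-median point, with invented incomes rather than actual census or payroll figures:

```python
from statistics import mean, median

# 10,000 "ordinary" households with modest incomes, plus 25 players with
# enormous salaries (all numbers invented for illustration).
ordinary = [35_000] * 10_000
players = [8_000_000] * 25

incomes = ordinary + players
print(f"mean:   {mean(incomes):,.0f}")    # pulled up noticeably by the 25 outliers
print(f"median: {median(incomes):,.0f}")  # essentially unchanged
```

A few dozen enormous values are enough to move the mean of a whole borough-sized sample, while the median barely notices them.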

Lying with Statistics

| 1 Comment

I hope I'm not just contributing to the gossip mill, but the latest post on the Freakonomics blog is kind of scary.

Terrorism and Statistics

| 6 Comments

There was an interesting editorial in Sunday's New York Times about the anxiety produced by terrorism and people's general inability to deal rationally with said anxiety. All kinds of interesting stuff that I didn't know or hadn't thought about. Nassim Nicholas Taleb, a professor at UMass Amherst, writes that risk avoidance is governed mainly by emotion rather than reason, and our emotional systems tend to work in the short term: fight or flight; not fight, flight, or look at the evidence and make an informed decision based on the likely outcomes of various choices. Dr. Taleb points out that Osama bin Laden "continued killing Americans and Western Europeans in the aftermath of Sept. 11": People flew less and drove more, and the risk of death in an automobile is higher than the risk in an airplane. If you're afraid of an airplane hijacking, though, you're probably not thinking that way. It would be interesting to do a causal analysis of the effect of the September 11 terrorist attacks on automobile deaths (maybe someone already has?).

3 Books

| 2 Comments

One of the more memorable questions I was asked when on the job market last year was "If you were stranded on a deserted island with only three statistics books, what would they be?". (I'm not making this up.) If I were actually in that incredibly unlikely and bizarre situation, the best thing would probably be to just choose the three biggest books out there, in case I needed them for a fire or something. I'm pretty sure there's no tenure clock on deserted islands. But I digress. What I said was:

1. Gelman, Carlin, Stern, and Rubin (Bayesian Data Analysis)
2. The Rice Book (Mathematical Statistics and Data Analysis, John Rice)
3. Muttered something about maybe a survey sampling book and quickly changed the subject.

If anyone ever asks me that again, I think I'll change Number 3 to Cox's The Planning of Experiments.

Pet Peeve

| 2 Comments

I was reading an article in the newspaper the other day (I think it was about Medicare fraud in New York state, but it doesn't really matter) that presented some sort of result obtained from a "computer analysis." A computer analysis? Regression analysis, even statistical or economic analysis, would give at least some vague notion of what was done, but the term computer analysis is about as uninformative as saying that the analysis was done inside an office building. It's sort of like saying you analyzed data using Gibbs sampling, as opposed to saying what the model was that the Gibbs sampler was used to fit. Not untrue, but pretty uninformative.

A cool way to summarize a basketball player's contribution to his team is the plus-minus statistic or "Roland rating," which is "the difference in how the team plays with the player on court versus performance with the player off court."

I had heard of this somewhere and found the details searching on the web, via this page of links on basketball statistics, which led me to Kevin Pelton's statistical analysis primer, which led me to this page by Dan Rosenbaum. I had been wondering if the plus-minus statistic could be improved by adjusting for the qualities of the other players and teams on the court, and Rosenbaum has done just that.

I have a further thought, which is to apply a multilevel model to be able to handle thinner slices of the data. The issue is that, unless sample sizes are huge, the adjustments for the abilities of other players and other teams will be noisy. Multilevel modeling should help smooth out this variation and allow for adjusting for more factors, or for adjusting for the same factors for shorter time intervals. Sort of like the work of Val Johnson on adjusting grade point averages for the different abilities of students taking different courses.
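Here's a minimal sketch of the partial-pooling idea for a single player's raw plus-minus (a normal-normal shrinkage estimate with invented numbers, not Rosenbaum's adjusted plus-minus):

```python
def partial_pool(raw_estimate, sigma2, tau2, league_mean=0.0):
    """Posterior mean under a normal-normal model: a precision-weighted average.
    sigma2 is the sampling variance of the player's raw estimate,
    tau2 the between-player variance of true abilities (both invented here)."""
    w = (1 / sigma2) / (1 / sigma2 + 1 / tau2)
    return w * raw_estimate + (1 - w) * league_mean

# A bench player (few minutes, very noisy estimate) vs. a starter (lots of minutes).
print(partial_pool(raw_estimate=8.0, sigma2=25.0, tau2=4.0))  # shrunk most of the way to 0
print(partial_pool(raw_estimate=8.0, sigma2=1.0, tau2=4.0))   # estimate mostly retained
```

The same raw number gets treated very differently depending on how much data stands behind it, which is what lets a multilevel model handle thinner slices of the data without falling apart.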

Jeff Fagan forwarded this article on gun violence by Jeffrey Bingenheimer, Robert Brennan, and Felton Earls. The research looks at children in Chicago who were exposed to gun violence, and uses propensity score matching to find a similar group who were unexposed. Their key finding: "Results indicate that exposure to firearm violence approximately doubles the probability that an adolescent will perpetrate serious violence over the subsequent 2 years."
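Before the excerpt, here's a minimal sketch of the propensity-score-matching idea on simulated data; the covariates, effect sizes, and matching rule are all invented for illustration and have nothing to do with the authors' actual analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Simulated confounders (stand-ins for the background variables one would adjust for).
x = rng.normal(size=(n, 2))
# Exposure depends on the confounders...
p_exposed = 1 / (1 + np.exp(-(0.8 * x[:, 0] + 0.5 * x[:, 1] - 0.5)))
exposed = rng.binomial(1, p_exposed)
# ...and so does the outcome, plus an assumed exposure effect of 0.5 on the logit scale.
p_outcome = 1 / (1 + np.exp(-(-1.0 + 0.7 * x[:, 0] + 0.5 * exposed)))
outcome = rng.binomial(1, p_outcome)

# 1. Estimate propensity scores from the confounders.
scores = LogisticRegression().fit(x, exposed).predict_proba(x)[:, 1]

# 2. Match each exposed subject to the unexposed subject with the closest score.
exp_idx = np.where(exposed == 1)[0]
ctl_idx = np.where(exposed == 0)[0]
matches = ctl_idx[np.argmin(np.abs(scores[exp_idx, None] - scores[ctl_idx]), axis=1)]

# 3. Compare outcome rates in the matched samples.
print("naive difference:  ", outcome[exposed == 1].mean() - outcome[exposed == 0].mean())
print("matched difference:", outcome[exp_idx].mean() - outcome[matches].mean())
```

The matched comparison strips out the part of the raw difference that is driven by the confounders rather than by the exposure itself.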

I'll first give a news report summarizing the article, then my preliminary thoughts.

Here's the summary:

Bryan Caplan writes about a cool paper from 1999 by Philip Tetlock on overconfidence in historical predictions. Here's Caplan's summary:

Tetlock's piece explores the overconfidence of foreign policy experts on both historical "what-ifs" ("Would the Bolshevik takeover have been averted if World War I had not happened?") and actual predictions ("The Soviet Union will collapse by 1993.") The highlights:

• Liberals believe that relatively minor events could have made the Soviet Union a lot better; conservatives believe that relatively minor events could have made South Africa a lot better.

• Tetlock asked experts how they would react if a research team announced the discovery of new evidence. He randomly varied the slant of the evidence. He found a "pervasiveness of double standards: experts switched on the high-intensity search light of skepticism only for dissonant results."

• Tetlock began collecting data on foreign policy experts' predictions back in the 80's. For example, in 1988 he asked Sovietologists whether the USSR would still be around in 1993. Overall, experts who said they were 80% or more certain were in fact right only 45% of the time.

• How did experts cope with their failed predictions? "[F]orecasters who had greater reason to be surprised by subsequent events managed to retain nearly as much confidence in the fundamental soundness of their judgments of political causality as forecasters who had less reason to be surprised." The experts who made mistakes often announced that it didn't matter because prediction is pretty much impossible anyway (but then why did they assign high probabilities in the first place?!). The mistaken experts also often said they were "almost right" (e.g. the coup against Gorbachev could have saved Communism) but correct experts very rarely conceded that they were "almost wrong" for similar reasons.

Caplan goes on to discuss the possibility that forecasters might have been better calibrated if they had been betting money on their predictions. This is an interesting point but I'd like to take the discussion in a different direction. Beyond the general interest in cognitive illusions I've had since reading the Kahneman, Slovic, and Tversky book way back when, Tetlock's study interests me because it interacts with Niall Ferguson's work on potential outcomes in historical studies and Joe Bafumi's work on the stubborn American voter.

Virtual history and stubborn voters

Ferguson edited a book on "virtual history" in which he considered historical speculations, and retroactive historical speculations, in the potential-outcome framework that is used in statistical inference. These ideas also come up in other fields, such as law (as pointed out here by Don Rubin). I'm not quite sure how overconfidence fits in here but it seems relevant.

Joe Bafumi, in his paper on the "stubborn American voter" (here's an old link; I don't have a link to the updated version of the paper), found that in the past twenty years or so, Americans have become more partisan, not only in their opinions, but also in their views on factual matters. This seems similar to what Tetlock found and also suggests that the time dimension is relevant. Joe also considers views of elites vs. average Americans.

Finally . . .

Tetlock's paper was great but I'd like it even better if the results were presented as graphs rather than tables of numbers. In my experience, graphical presentations make results clearer, but even more important, can generate new hypotheses and reject existing hypotheses I didn't realize I had.

My impression is that statisticians and data analysts see graphics as an "exploratory" tool for looking at data, maybe useful when selecting a model, but then when they get their real results, they present the numbers. But in my conception of exploratory data analysis (see also here for Andreas Buja's comment and here for my rejoinder), graphs are about comparisons. And, as is clear from Caplan's summary, Tetlock's paper is all about comparisons--stated probabilities compared to actual probabilities, liberals compared to conservatives, and so on. So I think something useful could possibly be learned by re-expressing Tetlock's Tables 1, 2, 3, and 4 as graphs. (Perhaps a good term project for a student in my regression and multilevel modeling class this fall?)
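For instance, one natural graph is a calibration plot, stated probability against observed frequency, with the 45-degree line as the reference for perfect calibration. A sketch with made-up numbers (I don't have Tetlock's tables in front of me):

```python
import matplotlib.pyplot as plt

# Made-up calibration data: stated probability bins and the fraction of
# predictions in each bin that actually came true (illustrative only).
stated   = [0.1, 0.3, 0.5, 0.7, 0.9]
observed = [0.25, 0.35, 0.45, 0.50, 0.55]

plt.plot([0, 1], [0, 1], linestyle="--", color="gray", label="perfect calibration")
plt.plot(stated, observed, marker="o", label="experts (invented data)")
plt.xlabel("stated probability")
plt.ylabel("observed frequency")
plt.legend()
plt.show()
```

The same plot could be drawn separately for liberals and conservatives, or for different question types, which is exactly the kind of comparison that gets lost in a table.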

From Mahalanobis, a link to a story following up on medical research findings. From the CNN.com article:

New research highlights a frustrating fact about science: What was good for you yesterday frequently will turn out to be not so great tomorrow.

The sobering conclusion came in a review of major studies published in three influential medical journals between 1990 and 2003, including 45 highly publicized studies that initially claimed a drug or other treatment worked.

Subsequent research contradicted results of seven studies -- 16 percent -- and reported weaker results for seven others, an additional 16 percent.

That means nearly one-third of the original results did not hold up, according to the report in Wednesday's Journal of the American Medical Association.

This is interesting, but I'd like to hear more. If we think of effects as being continuous, then I'd expect that "subsequent research" would find stronger results half the time, and weaker results the other half the time. I imagine their dividing line relates to statistical significance, but that criterion can be misleading when making comparisons.
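Here's a quick simulation of that intuition (all effect sizes and noise levels are invented): if a replication is just a new draw of the same noisy estimate, it comes out weaker about half the time; condition on the original having been "statistically significant," and the replication looks weaker more often than that.

```python
import random

random.seed(1)

def simulate(n_sims=100_000, effect=1.0, se=1.0):
    """Original and replication estimate the same true effect with the same noise."""
    weaker_all = weaker_sig = n_sig = 0
    for _ in range(n_sims):
        original = random.gauss(effect, se)
        replication = random.gauss(effect, se)
        weaker_all += replication < original
        if original / se > 1.96:  # original cleared a significance filter
            n_sig += 1
            weaker_sig += replication < original
    print("P(replication weaker), all originals:          ", weaker_all / n_sims)
    print("P(replication weaker), 'significant' originals:", weaker_sig / n_sig)

simulate()
```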

I'm not saying there's anything wrong with this JAMA article, just that I'd like to see more to understand what exactly they found. They do mention as an example the notorious post-menopausal hormone study.

P.S. For the name-fans out there, the study is by "Dr. John Ioannidis, a researcher at the University of Ioannina." I wonder if having the name helped him get the job.

Here's another one from Chance News:

Red enhances human performance in contests

Chance News points us toward this list of statistical cliches in baseball:

New York Times, April 3, 2005, Section 8, Pg 10 Alan Schwarz

The author writes: with statistics courtesy of Stats Inc., the following is a user's guide to the facts behind seven statistical cliches. We [Chance News, that is] have included excerpts from his explanation and recommend reading his complete discussions.

(1) HAS A 75-6 RECORD WHEN LEADING AFTER EIGHT INNINGS

Teams leading after eight innings last year won about 95 percent of the time (translating to a 77-4 record in 81 games); that 75-6 record would be two full games worse than average. Even after seven innings, teams with leads typically win 90.1 percent of the time.

(2) HOLDS LEFTIES TO A .248 AVERAGE.

Middle relievers have become ever more important in baseball, particularly left-handed specialists who jog in to face only one or two left-handed hitters. Last year, left-handed middle relievers held fellow lefties to a .249 collective average, 18 points lower than the major league-wide .267 average in all other situations. Someone yielding a .248 average sounds good but is merely doing his job.

(3) HAS HIT IN 9 OF HIS LAST 12 GAMES

Last year, each game's starting position players finished with at least one hit 67.1 percent of the time. So across any 12-game stretch, simple randomness will have almost half of them hitting safely in eight or nine games. More than half will wind up with hits in eight or more.

(4) HAS 31 SAVES IN 38 OPPORTUNITIES

Relievers who were considered closers converted saves 84.8 percent of the time last season -- 32 times for every 38 chances.

(5) HAS STOLEN 19 BASES IN 27 ATTEMPTS (70%)

Players batting first and second in their lineups, usually speedy table-setters, stole bases 73.7 percent of the time last season.

(6) LEADS N.L. ROOKIES WITH A .287 AVERAGE

Interesting, perhaps, but most people do not realize how few rookies play enough to be considered for this type of list. Last year, six rookies reached the standard cutoff of 502 plate appearances to qualify for the batting title.

(7) HITS .342 ON THE FIRST PITCH

The stat line many people use to make these claims reads "on 0-0 counts." What people do not realize is that "on 0-0 counts" includes only at-bats that end on the first pitch; in other words, the hitter put the ball in play. Removing every time a hitter swings through a pitch or fouls it off will make anyone look good.

I've seen some of these before but this presentation (by Alan Schwarz, edited by Chance News) is particularly crisp. I like how they don't just mock the "cliches"; they actually provide some data.
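As a quick check on item (3): treating games as independent with a constant 67.1 percent chance of at least one hit (the implicit assumption in the cliche), the binomial arithmetic is consistent with Schwarz's claim.

```python
from math import comb

p, n = 0.671, 12
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

print("P(hits in exactly 8 or 9 of 12 games):", pmf[8] + pmf[9])  # roughly 0.45, "almost half"
print("P(hits in 8 or more of 12 games):     ", sum(pmf[8:]))     # roughly 0.64, "more than half"
```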

30 stories in 30 days

| 4 Comments

Andrea Siegel sent me this a while ago--some stories about her experiences working in a chain bookstore in NYC. My favorites are #6, #11, #16, #17, and #24, but there's a pleasant total-immersion feeling from reading all of them.

My first 30 days at a mid-Manhattan bookstore
(c) 1999 Andrea Siegel. All rights reserved.

Commenting on my thoughts about decision analysis and Schroedinger's cat (see here for my clarifications), Dave Krantz writes,

I'd first like to comment on the cat example, and then turn to the relationship to probabilistic modelling of choice.

I think one can gain clarity by thinking about simpler analogs to Schroedinger's cat. Instead of poison gas being released, killing the cat, let's suppose that a single radioactive decay just releases one molecule of hydrogen (H2) into an otherwise empty (hard vacuum) cat box. Now an H2 molecule is something that, in principle, one can describe pretty well by a rather complicated wave function. The wave function for an H2 molecule confined to a small volume, however, is different from the wave function for an H2 molecule confined to a much larger cat box. At any point in time, our best description (vis-a-vis potential measurements we could make that would interact with the H2 molecule) is a superposition of these two wave functions, narrowly or broadly confined. As long as we don't know whether the radioactive decay has taken place, and we make no observation that directly or indirectly interacts with the H2 molecule, the superposition continues to be the best physical model.

This example points up the fact that Schroedinger's cat involves two different puzzles. The first is epistemological: we are used to thinking of a cat as alive or dead, but equally used to thinking of an H2 molecule as confined narrowly or broadly. How can it be both? But this way of thinking just won't work in QM. The point of the double-slit experiments is to show clearly that an unobserved photon does NOT go through one slit or the other; it goes through both, in the sense of its wave function giving rise to coherent circularly symmetric waves emanating from each slit and interfering. It is equally wrong to think that an H2 molecule is either confined narrowly or broadly. Observations are going to be accounted for by assuming a superposition.

The second puzzle arises because a cat cannot in practice be described by a single wave function at all. That's at least true of an ordinary cat, subject to many sorts of observation. But in practice, even an unobserved cat is not describable by a wave function. There are wave functions for each molecule, but the best descriptions do not collapse these into a single wave function. Coherence fails. To take an analogy, one can get monochromatic light by passing a beam through an interference filter; though the frequencies of the different photons are all alike, the phases still vary randomly. This is very different from the coherent light of a laser, where everything is in phase.

There is a real problem of understanding when incoherent wave functions collapse into a single coherent one. This has been dramatized, in recent years, by studies of Bose-Einstein condensates. Rubidium atoms can be very near one another, yet still incoherent; but at low temperatures, they become a single molecular system, with a condensed wave function. The study of conditions for coherence is on-going, as I understand it. A cat is outside the boundaries of coherence.

Epistemologically, the introduction of probabilities as fundamental terms in choice modelling is rather analogous to the introduction of probabilities in QM measurement. It has always struck me as curious that the two happened in the same year, 1927: Born developed the probabilistic interpretation of QM measurement and Thurstone formulated the law of comparative judgment.

Where the analogy breaks down, however, is that there isn't any analog to a wave function in choice models. Thurstone actually tried to introduce something like it, with his discriminal processes, but from the start, discriminal processes were postulated to be independent rather than coherent random variables. Thus, I don't see much point in pushing the analogy of any DM problem with the Schroedinger cat problem, where the essence is superposition rather than independence.

My thoughts

OK, that was Dave talking. To address his last point, yes, I don't see where the complex wave function would come in. (Dsquared makes the same point in the comments to this entry.) In probability theory we're all happy to use Boltzmann statistics (i.e., classical probability theory). I've never seen anyone make a convincing case (or even try to make a case) that, for example, Fermi-Dirac statistics should be used for making business decisions.

But Dave's point above about "coherence" is exactly what I was talking about. Also there's the bit about the collapse of the wave function (or of the decision tree). But I suppose Dave would say that, without complex wave functions, there's no paradox there. With classical Boltzmann statistics, the cat really is just alive or dead all along, with no need for superposition of states.

Jim Thompson's cat

Hmmm...my feeling is that the act of deliberation, or even just of keeping a decision "open" or "alive," creates a superposition of states. If I'm deciding whether or not to flip the switch, then I wouldn't say that the cat is "either alive or dead." I haven't decided yet! In The Killer Inside Me, Jim Thompson writes, "How can you hurt someone that's already dead?", but I don't take such a fatalistic position.

Roger Penrose's consciousness

But hey, let's take this one step further. In my experiment (as opposed to Schroedinger's), the cat is alive or dead based on my decision of whether to flip a switch (and, in turn, this decision is ultimately coupled with other outcomes of interest; e.g., the switch also turns off the light in the next room, which encourages the lab assistant to go home for the day, and then he might bump into someone on the subway, etc., etc.). If it is true, as Penrose claims in The Emperor's New Mind, that consciousness is inherently quantum-mechanical and non-algorithmic, then my decision of whether to flip the switch indeed must be modeled as a superposition of wave functions. Although then I'm not quite sure how deliberation fits in to all this.

Anyway, to get more positivistic for a moment, maybe the next research step is to formulate some actual decision problems (or realistic-seeming fake problems) in terms of coherence, and see if anything useful comes of it.

P.S. Dave is very modest on his webpage but he's actually the deepest thinker I know of in decision analysis.

P.P.S. It's funny that Dave has a cat living in a "cat box," which I always thought was equivalent to the litterbox (so I recall from my catful days). Maybe "cat container" would be a better phrase?

I appreciated the comments on my recent entry on decision analysis and Schroedinger's cat.

Some comments

Chris sent some general links, and Simon and Dsquared referred to some specific decision problems in finance--an area I know nothing about but certainly seems like a place where formal decision analysis would be useful.

Deb referred to the expected value of information (a concept I remember from teaching classes in decision analysis) and wonders why I have to bring quantum mechanics and Roger Penrose into the picture.

Why bring in quantum mechanics?

I bring up quantum mechanics for two reasons. First, making a decision has the effect of discretizing a continuous world. (Just as, in politics, a winner-take-all election converts a divided populace into a unidirectional mandate.) I see a strong analogy here to the collapsing of the wave function. To bring in a different physics analogy, decision-making crystallizes a fluid world into a single frozen choice.

The second connection to quantum mechanics arises because decisions are not made in isolation, and when we wait on a decision, it tends to get "entangled" with other decisions, producing a garden of forking paths that is a challenge to analyze. At some point--even, possibly, before the "expected value of additional information" crosses the zero line--decisions get made, or decision-making gets forced upon us, because it's just too costly for all concerned to live with all the uncertainty. (I wouldn't say this is true of all decisions or even most decisions, but it can arise, especially I think in decisions which are loosely coupled to other decisions--for example, a business decision that affects purchasing, hiring in other divisions, planning, etc.) This is the Penrose connection--that quantum states (or decisions) get resolved when they are entangled with enough mass.
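To tie this back to Deb's expected-value-of-information point: here's a toy calculation (invented payoffs and probabilities) of the expected value of perfect information; roughly speaking, once the cost of waiting exceeds this number, the decision "makes itself."

```python
# Two projects, two possible states of the world (all numbers invented).
p_state = {"drug_works": 0.4, "drug_fails": 0.6}
payoff = {
    "pursue_drug":   {"drug_works": 100, "drug_fails": -40},
    "pursue_backup": {"drug_works": 20,  "drug_fails":  20},
}

# Decide now: pick the action with the best expected payoff.
expected = {a: sum(p_state[s] * v for s, v in outcomes.items())
            for a, outcomes in payoff.items()}
best_now = max(expected.values())

# Decide after learning the state: pick the best action in each state.
with_info = sum(p_state[s] * max(payoff[a][s] for a in payoff) for s in p_state)

evpi = with_info - best_now
print(f"decide now: {best_now}, with perfect info: {with_info}, EVPI: {evpi}")
print("waiting is worth it only if the cost of delay stays below", evpi)
```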

P.S.

The other thing I learned is that links don't always work. Chris sent me this link, Simon sent this, and Dsquared sent this. My success: 0/3. 1 broken link and 2 with password required.

Publicity

| No Comments

Seth writes,

Hi Andrew,

I probably have you to thank for the fact that the abstract of my long self-experimentation paper appears in this month's Harper's Readings. I remember meeting through you a woman who used to assemble that section. And maybe she still does. Did you tell her about my paper?

I first saw the excerpt in an airport bookstore at Midway airport. On the flight from Chicago to Oakland I was miraculously seated next to someone who had bought that issue. (Shouldn't the odds of that be very low?) She started reading it. She eventually got to my abstract. "I read that," I said to her. "What do you think of it?" "I don't know what to make of it," she said. "What did you think of it?" "I liked it," I said.

Seth

Nice story. By the way, I assume that the article came to the attention of Harper's through this blurb in Marginal Revolution rather than from Alexandra Ringe, who used to work at Harper's. (Here's my take on Seth's article.)

P.S. Regarding Seth's question about the low odds: I was once in the Cleveland airport when I was paged. It was for a different "Andy Gelman." I remember years ago reading a book by the mathematician J. Littlewood that had something about the frequency of rare events in one's life. So I Googled "littlewood coincidences" and found this quote from Freeman Dyson:

Littlewood's Law of Miracles states that in the course of any normal person's life, miracles happen at a rate of roughly one per month. The proof of the law is simple. During the time that we are awake and actively engaged in living our lives, roughly for eight hours each day, we see and hear things happening at a rate of about one per second. So the total number of events that happen to us is about thirty thousand per day, or about a million per month. With few exceptions, these events are not miracles because they are insignificant. The chance of a miracle is about one per million events. Therefore we should expect about one miracle to happen, on the average, every month.

Also mentioned here in Chance News.
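Dyson's arithmetic is easy to check, and a small Poisson refinement (taking the one-in-a-million definition of a miracle at face value) puts the chance of at least one miracle in a given month at around 60 percent:

```python
from math import exp

events_per_day = 8 * 60 * 60        # one event per waking second, 8 hours a day
events_per_month = 30 * events_per_day
rate = events_per_month * 1e-6      # expected "miracles" per month at 1-per-million

print("events per month:", events_per_month)           # 864,000 -- about a million
print("expected miracles per month:", rate)            # about 0.86
print("P(at least one this month):", 1 - exp(-rate))   # about 0.58, under a Poisson model
```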

One of the mysteries of quantum mechanics (as I recall from my days as a physics major, and from reading Roger Penrose's books) is the jump from complex probability amplitudes to observed outcomes, and the relation between observation and measurement. Heisenberg, 2-slit experiment, and that cat that's both alive and dead, until it's observed, at which point it becomes either alive or dead. As I recall from reading The Emperor's New Mind, Penrose believed that it was not the act of measurement that collapsed the cat's wavefunction, but rather the cat's (or, more precisely, the original electron whose state was uncertain) getting entangled with enough mass that the two possibilities could not simultaneously exist.

OK, fine. I haven't done any physics since 1986 so I can't comment on this. But it reminded me of something similar in decision making.

Consider a decision that must be made at some unspecified but approximately-known time in the future. For example, a drug company must choose which among a set of projects to pursue (and does not have the resources to pursue all of them). The choice need not be made immediately, and waiting will allow more information to be gathered to make a more informed decision. At the same time, the clock is ticking and there are losses associated with delay. In addition to the obvious losses (not going full-bore on a promising project leads to a later expected release date, thus fewer lives saved and less money made), waiting ties up other resources of suppliers, customers, etc. [Yes, this example is artificial--I'm sure I can think of something better--but please bear with me on the general point.]

So this is the connection to quantum mechanics. We have a decision, which will ultimately either kill a cat or not, and it makes sense to keep the decision open as long as possible, but at some point it becomes entangled with enough other issues that the decision basically makes itself, or, to put it another way, the decision just has to be made. The act of decision is equivalent to taking a measurement in the physical experiment.

I think there's something here, although I'm not quite sure what.

P.S. Further discussion here.

The morphing poster

| 1 Comment

Jim Liebman pointed me toward this news article that referred to our study of the death-penalty appeals process. I'll briefly discuss our findings, then give the news article, then give my reactions to the news article.

Here's the abstract of our paper, which appeared last year in the Journal of Empirical Legal Studies:

We collected data on the appeals process for all death sentences in U.S. states between 1973 and 1995. The reversal rate was high, with an estimated chance of at least two-thirds that any death sentence would be overturned by a state or federal appeals court. Multilevel regression models fit to the data by state and year indicate that high reversal rates are strongly associated with higher death-sentencing rates and lower rates of apprehending and imprisoning violent offenders. In light of our empirical findings, we discuss potential remedies including "streamlining" the appeals process and restricting the death penalty to the "worst of the worst" offenders.
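For readers wondering what "multilevel regression models fit to the data by state and year" could look like in symbols, here's a generic varying-intercept sketch (assumed notation, not the exact specification in our paper):

```latex
% y_st = number of reversals among the n_st death sentences from state s in year t;
% X_st = state-year predictors (e.g., the death-sentencing rate).
\begin{align*}
  y_{st} &\sim \mathrm{Binomial}(n_{st},\, p_{st}) \\
  \mathrm{logit}(p_{st}) &= \mu + \alpha_s + \gamma_t + X_{st}\beta \\
  \alpha_s &\sim \mathrm{N}(0,\sigma_\alpha^2), \qquad \gamma_t \sim \mathrm{N}(0,\sigma_\gamma^2)
\end{align*}
```

The varying intercepts by state and year are what allow reversal rates to be estimated reasonably even for state-years with few death sentences.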

"Frivolous" reversals?

Section III of our paper discusses reasons for reversal in detail. We found that most of the reversals at these two review stages occurred where the correct outcome of the trial was in doubt; the reversing courts found that, if it had not been for the error, there was a "reasonable probability" that the outcome would have been different.

More broadly, there is no evidence that judges are systematically disposed to ignore or frustrate the public will on the death penalty. About 90 percent of the reversals in our study were by elected state judges--who generally need the support of a majority of the voters in order to take or remain in office. Most of the remaining reversals were by federal judges appointed by Republican presidents with strong law-and-order agendas.

The Reuters article

Social networks in academia

| No Comments

Gueorgi Kossinets (a Ph.D. student in our collective dynamics group here at Columbia) forwarded this article on the role of social networks in faculty hiring.

This reminds me that Tian once told me that I had the reputation of writing lukewarm letters of recommendation. Of course, I was thrilled to have any reputation at all, but I wasn't so happy that the rep was of not being nice. After that, I consciously ratcheted up my letters. For a few years, my letters probably had extra impact until people learned to normalize.

But I don't want to go too far and become like the well-known statistician whose letters are always so uniformly positive that they get calibrated down to zero. Not enough data around to use formal statistical adjustment.

Jouni pointed me to this course on information visualization by Ross Ihaka (one of the original authors of R).

It looks great (and should be helpful for me in preparing my new course in statistical graphics next spring). My only complaint is that it focuses so strongly on techniques without any theoretical discussion of how graphical methods relate to statistical ideas such as model checking and exploratory data analysis. (This is a particular interest of mine.)

I'll have to look over the notes in detail to see what I can learn. I use pretty sloppy programming techniques to make my graphs--I always have to do a lot of hand-tuning to get them to look just how I want--and I think Ihaka's more systematic approach could be helpful.

In the meantime, a few picky comments

Extra time on the SAT?

| No Comments

Newmark's Door links to the following story by Samuel Abrams about scores on College Board exams where disabled students get extra time. Apparently, if you can convincingly demonstrate that you need "special accommodation," you can get extra time on the SAT. Abrams writes,

David Budescu (a cognitive psychologist who has studied the perception of uncertainty) has the following thoughts on John Sides's work on overestimation of immigrants:

I don't have much to add to some of the comments. This overestimation is, probably, due to a combination of several factors: (a) different definitions of the target event (the judges may generalize and assume, for example, that all the children of foreign-born residents are also born abroad), (b) vividness (members of the target population stand out -- looks, accent, language, clothing), (c) clustering (often they are concentrated in certain areas), (d) typically, these surveys don't employ incentives for truthful responding (i.e. proper scoring rules), and some people may respond "strategically" by inflating their estimates to make a political point.

In reference to the recent entry on misperception of minorities,
John Sides sent me the following data on the estimated, and actual, percentage of foreign-born residents in each of 20 European countries:

[Bar graph: estimated vs. actual percentage of foreign-born residents in each of 20 European countries]

The estimates are average survey responses in each country. People overestimate the % foreign born everywhere, but especially where the percentage is close to zero. This is consistent with the Erev, Wallsten, and Budescu findings about estimation of uncertain proportions.
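As a toy illustration of how that pattern can arise (this is not the Erev, Wallsten, and Budescu model, just a sketch of the qualitative idea): if reported percentages behave like a weighted average of the truth and a 50 percent "prior" on the log-odds scale, small true percentages get inflated proportionally the most.

```python
from math import log, exp

def perceived(true_pct, weight=0.6, prior_pct=50.0):
    """Weighted average of the truth and a 50% anchor on the log-odds scale (toy model)."""
    lo = lambda p: log(p / (100 - p))
    mix = weight * lo(true_pct) + (1 - weight) * lo(prior_pct)
    return 100 / (1 + exp(-mix))

for true_pct in [2, 5, 10, 20, 40]:
    print(f"true {true_pct:2d}%  ->  perceived about {perceived(true_pct):4.1f}%")
```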

John writes,


About this Archive

This page is an archive of entries from July 2005 listed from newest to oldest.
