More on problems with surveys estimating deaths in war zones

Andrew Mack writes:

There was a brief commentary from the Benetech folks on the Human Security Report Project's report, "The Shrinking Costs of War," on your blog in January.

But the report has since generated a lot of public controversy. Since the report, like the current discussion on your blog of Mike Spagat's new paper on Iraq, deals with controversies generated by survey-based excess death estimates, we thought your readers might be interested.

Our responses to the debate were posted on our website last week. "Shrinking Costs" discussed the dramatic decline in death tolls from wartime violence since the end of World War II, and the causes of that decline. We also argued that deaths from war-exacerbated disease and malnutrition had declined. (The exec. summary is here.)

One of the most striking findings was that mortality rates (we used under-five mortality data) decline during most wars. Indeed, our latest research indicates that of the total number of years that countries were involved in warfare between 1970 and 2008, the child mortality rate increased in only 5% of them. Les Roberts has strongly challenged these findings.
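To make the unit of analysis concrete (country-war-years, not wars), here is a toy sketch of that calculation. The records below are invented for illustration and will not reproduce the reported 5% figure; the real calculation would run over actual conflict-year and U5MR data.

```python
# Each record is a (country, year) in which the country was at war,
# paired with the change in its under-five mortality rate that year.
# These values are made up purely to illustrate the bookkeeping.
war_years = [
    ("A", 1971, -2.1), ("A", 1972, -1.8), ("B", 1980, +0.4),
    ("B", 1981, -0.9), ("C", 1995, -3.0), ("C", 1996, -2.5),
]

# Count the war years in which child mortality rose rather than fell.
rose = sum(1 for _, _, change in war_years if change > 0)
print(f"U5MR rose in {rose / len(war_years):.0%} of war years")
```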

We didn't, of course, mean that war is good for people's health, simply that recent conflicts have rarely been deadly enough to reverse the long-term secular decline in child mortality brought about by improved living conditions and a range of low-cost public health interventions, notably immunization. These are part of what UNICEF has aptly described as "the revolution in child survival."

But many people had suggested to us that these findings seemed totally at odds with the extraordinary survey-derived excess death toll of 5.4 million for the Congo (1998-2007) produced by the International Rescue Committee (IRC). The IRC is a well-respected humanitarian NGO, and its widely cited, peer-reviewed, survey-based excess death estimates had not been subjected to any of the public controversy that has surrounded excess war death estimates in Iraq and Darfur.

But when we delved into the IRC's reports, we uncovered many methodological and data problems whose net effect was to greatly, and we believe unwarrantedly, inflate the excess death toll.

The first two surveys (there were five in total) were not based on representative samples, and the evidence suggests that the excess death tolls derived from them were far too high.

When we plotted the increase in the IRC's child mortality rate for the first two survey periods and compared it with the child mortality trend data from a Demographic and Health Survey that covered the same period, it was clear that something was seriously wrong. The bright blue U5MR trend line in the graph below is the one derived from the IRC's survey data. If the DHS data (dark blue and red trend lines) are even roughly correct, the IRC's estimate of 2.5 million excess deaths for this period must be far too high. In the new millennium the IRC's U5MR levels off but remains approximately twice that of the DHS; the two cannot both be correct.

[Figure: Under-five mortality trends: IRC survey-derived U5MR (bright blue) vs. DHS estimates (dark blue and red trend lines).]

The second major problem was that the IRC researchers used the sub-Saharan African (SSA) average mortality rate as the baseline mortality rate for their excess death calculations. This makes little sense, since the Congo is far from being an average African country: it languishes at or near the bottom of just about every development indicator for the region.

When we re-ran the IRC's calculations for the last three surveys using a higher and, we argue, more realistic baseline rate, the excess death toll shrank from more than 2.8 million to less than 900,000. The point we were making was not that our baseline estimate was necessarily correct, but rather that a modest and wholly defensible increase in the baseline rate can lead to a huge change in the excess death toll.
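To see why the estimate is so sensitive, here is a minimal sketch of the excess-death arithmetic. All the inputs below (population, survey period, observed and baseline rates) are invented for illustration; they are not the IRC's or the HSRP's actual figures.

```python
def excess_deaths(observed_rate, baseline_rate, population, months):
    """Excess deaths implied by the gap between the observed crude death
    rate and an assumed baseline rate (both in deaths per 1,000 per month),
    applied to the covered population over the survey period."""
    return (observed_rate - baseline_rate) * population / 1000 * months

# Hypothetical inputs, chosen only to show the leverage of the baseline.
population = 40_000_000    # population covered by the surveys
months = 54                # survey period in months
observed = 2.2             # observed crude death rate

for baseline in (1.5, 1.8, 2.0):
    toll = excess_deaths(observed, baseline, population, months)
    print(f"baseline {baseline}: {toll:,.0f} excess deaths")
```

In this toy example, a 0.3-point shift in the assumed baseline (from 1.5 to 1.8 deaths per 1,000 per month) changes the toll by more than 600,000 deaths, the same kind of leverage Mack describes.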

The IRC and Les Roberts (who was the lead researcher for the IRC's first two surveys) both challenged our findings shortly after the report's launch in late January. Their responses were widely circulated in the media and generated a lot of commentary in the blogosphere. We have just published detailed rebuttals of these responses on our website. That page provides (a) a very brief introduction to the debate, (b) a link to an overview of our take on the main issues, and (c) detailed responses to the IRC and Les Roberts critiques.

Interesting points. I have not looked into the matter (beyond what I’ve discussed in earlier blog entries) so I’ll just put this out there for people to consider on their own.

On a technical matter, the above graph is pretty good but could be improved:
– Label the lines directly rather than with a color code
– Put fewer axis labels (x can be 1970, 1980, 1990, 2000, and y can be 0, 10%, 20%, 30%) and make the labels larger and thus more readable.
– Remove the horizontal lines and the ugly shading.
Why don’t people do these things automatically? The key, as always, is to think of the graph as a form of communication, and ask what point is served by each item in the graph. Tufte reasoning, if you will.
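For what it's worth, here is a minimal matplotlib sketch of those three fixes, using invented placeholder data rather than the actual IRC and DHS series:

```python
import matplotlib.pyplot as plt

# Placeholder trend data, invented purely to demonstrate the layout.
years = [1970, 1980, 1990, 2000, 2007]
irc = [0.16, 0.17, 0.19, 0.28, 0.28]
dhs = [0.21, 0.19, 0.17, 0.15, 0.14]

fig, ax = plt.subplots()
ax.plot(years, irc, color="blue")
ax.plot(years, dhs, color="darkred")

# Label the lines directly instead of using a color-coded legend.
ax.text(years[-1] + 0.5, irc[-1], "IRC surveys", va="center")
ax.text(years[-1] + 0.5, dhs[-1], "DHS", va="center")

# Fewer, larger axis labels.
ax.set_xticks([1970, 1980, 1990, 2000])
ax.set_yticks([0, 0.1, 0.2, 0.3])
ax.set_yticklabels(["0", "10%", "20%", "30%"])
ax.tick_params(labelsize=12)

# No gridlines, no background shading, lighter frame.
ax.grid(False)
ax.set_facecolor("white")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

ax.set_ylabel("Under-five mortality rate")
plt.show()
```

Each line of the script maps to one of the suggestions above: direct labels, sparse large tick labels, and no gridlines or shading.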