The Comparative Effectiveness of Preparedness

The CDC’s anthrax event earlier this week provides a reason to reflect on the uncertainties and difficult policy choices involved in reducing the health risks posed by large-scale disasters. Modern societies face a complex and evolving mix of potential disasters, ranging from intentional acts of violence and terrorism to uncontrolled viral pandemics to weather-driven emergencies intensified by climate change. A new paper from economists at the London School of Economics and MIT offers useful insight into a critical but too often obscured question in the preparedness field: how should governments decide which potential disasters to prevent and prepare for, and how much to invest in such strategies?

Ian Martin and Robert Pindyck’s paper shows that cost-benefit analysis of a single strategy to avert a catastrophe, considered in isolation, is unlikely to give us the correct answer about the health and economic value of pursuing that strategy. The policy interdependence of potential catastrophic events means that it is often not socially or economically optimal to avert all such events, even when an apparently cost-effective strategy is available for each hazard. To use preparedness speak, the optimal “all hazards” approach to risk mitigation may not, in fact, seek to avert all hazards.

A key reason for this result is that the existence of one hazard often increases the benefit of averting another, a consequence of diminishing returns. For example, a policy to contain the threat of pandemic influenza produces more social benefit when other hazards, such as catastrophic flooding or industrial accidents, also threaten the population. As these background or competing hazards diminish, so too does the value of averting the primary hazard. Martin and Pindyck’s theoretical analysis shows that, when seeking to avert multiple hazards, “the benefits are additive but the costs are multiplicative,” so policymakers should seek to identify the subset of hazard mitigation strategies that maximizes net benefit to society.
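To see the flavor of this result, consider a deliberately stylized objective (my own simplification for illustration, not the model in the paper): averting hazard i adds a benefit b_i to social welfare, but requires a permanent consumption sacrifice τ_i whose welfare cost compounds through an assumed curvature parameter γ > 0:

```latex
% Stylized net welfare from averting the set S of hazards:
% benefits enter additively, while each averted hazard scales
% the cost factor multiplicatively.
W(S) \;=\; \underbrace{\sum_{i \in S} b_i}_{\text{benefits add}}
\;-\; \underbrace{\Bigl[\,\prod_{i \in S} (1 - \tau_i)^{-\gamma} - 1\Bigr]}_{\text{costs compound}}
```

Because the cost factor multiplies while the benefits merely add, the marginal cost of averting one more hazard rises with every hazard already being averted, so a strategy that passes a standalone cost-benefit test can still fail to make the cut in the optimal portfolio.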

In practice, identifying this optimal subset of hazard prevention and preparedness strategies is difficult when there are many possible hazards and many ways of reducing risk, both of which vary across communities and population groups. To address these decision challenges, the authors point to a need for: (1) better measures of risk, vulnerability, and resilience; and (2) more research on the effectiveness and cost of prevention and preparedness strategies.
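To make the subset-selection problem concrete, here is a minimal brute-force sketch in Python built on the stylized objective above. The hazard names, benefit and cost numbers, and the curvature parameter are all hypothetical; the point is that each hazard passes a standalone cost-benefit test, yet the welfare-maximizing portfolio averts only one of them.

```python
# Brute-force search for the optimal subset of hazards to avert, using the
# stylized objective sketched above. All numbers are hypothetical; this
# illustrates the subset-selection problem, not Martin and Pindyck's model.
from itertools import combinations

GAMMA = 2.0  # assumed utility-curvature parameter

# Hypothetical hazards: (name, benefit b_i, permanent consumption tax tau_i)
hazards = [
    ("pandemic influenza", 0.45, 0.15),
    ("catastrophic flooding", 0.45, 0.15),
    ("industrial accident", 0.45, 0.15),
]

def net_benefit(subset):
    """Additive benefits minus multiplicatively compounding costs."""
    benefit = sum(b for _, b, _ in subset)
    cost_factor = 1.0
    for _, _, tau in subset:
        cost_factor *= (1.0 - tau) ** (-GAMMA)
    return benefit - (cost_factor - 1.0)

# Enumerate every subset (2^n of them) and keep the one with the highest value.
best = max(
    (combo for r in range(len(hazards) + 1) for combo in combinations(hazards, r)),
    key=net_benefit,
)

for name, b, tau in hazards:
    standalone = b - ((1.0 - tau) ** (-GAMMA) - 1.0)
    print(f"{name}: standalone net benefit {standalone:+.3f}")  # each is positive
print("optimal subset:", [name for name, _, _ in best])  # yet only one survives
```

Brute force enumerates all 2^n subsets, which is fine for an illustration but impractical at the scale of hazards and strategies real communities face; that gap is precisely why the better measures and effectiveness research the authors call for matter in practice.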

Fortunately, initiatives such as the National Health Security Preparedness Index program are pushing on both fronts – for improved metrics and for expanded research – promising eventually to enable vast improvements in evidence-informed preparedness policy. And even though the CDC’s federal Preparedness and Emergency Response Research Centers program is sunsetting, this initiative has spawned numerous lines of inquiry that are producing valuable evidence about the effectiveness of preparedness strategies (see, for example, work that Mary Davis and I have done with colleagues at UNC’s preparedness center to explicate the roles of local public health capabilities in producing preparedness). Some of the most promising recent progress in preparedness research explores what makes communities resilient to various types of hazards and how to promote and reinforce such resilience. See, for example, Malcolm Williams’s latest research on this topic, presented at last week’s AcademyHealth meeting.

Continued progress in these lines of inquiry will bring us closer to understanding the comparative effectiveness of preparedness: which combinations of strategies produce the greatest net benefits, for which populations, and in which community contexts. This type of comparative effectiveness research (CER) evidence is needed not just for treatment choices facing individual patients, but also for health policy choices facing populations.
