One of the most promising developments in health and social policy research these days is the renewed push for using experimental designs to determine the effectiveness and efficiency of programs, policies, and implementation strategies in real-world settings. Random-assignment studies have been the gold standard in medical research for more than a half-century because of the strong internal validity they provide, but such designs are used far less often to study non-clinical interventions. The time and monetary costs of trials, logistical barriers, legal and ethical concerns, and the problem of weak external validity have led many health services researchers and policy implementers to shy away from randomized designs in favor of purely observational and quasi-experimental studies. The large and massively expensive social experiments conducted in the 1970s and 1980s – like the RAND Health Insurance Experiment, the Negative Income Tax Experiment, and the COMMIT Smoking Cessation Trial – are probably partly to blame for our more recent trial-reluctance, despite the extremely valuable evidence generated by some (but not all) of these costly studies.
Driving the current enthusiasm for experimentation are the concepts of the pragmatic trial and the large simple trial, advanced in the context of promoting comparative effectiveness research (CER) and a learning health system. By relaxing some of the requirements of a traditional randomized, double-blind, placebo-controlled clinical trial, researchers can implement trials in real-world settings that reflect realistic policy and program choices and alternatives, dramatically improving external validity without sacrificing much internal validity. Pragmatic trial designs can also reduce the monetary and time costs of producing new evidence, for example by using existing data sources and reporting systems to monitor health and economic outcomes both before and after research subjects and/or settings are randomized to alternatives. And of course, it’s not just individual “patients” who can be randomly assigned to “treatment” alternatives – with a low-cost pragmatic trial, it becomes possible to randomly assign work teams, organizations, multi-organizational collaboratives, and even entire communities to different ways of doing things and different levels of exposure.
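As a concrete illustration, site-level (cluster) random assignment of the kind described above can be sketched in a few lines of code. This is a minimal, hypothetical example – the site roster, stratum labels, and arm names are invented for illustration, and a real trial would follow a pre-registered randomization protocol:

```python
import random

# Hypothetical roster of local sites; IDs and strata are illustrative only.
sites = [
    {"id": "LHD-01", "size": "large"},
    {"id": "LHD-02", "size": "small"},
    {"id": "LHD-03", "size": "large"},
    {"id": "LHD-04", "size": "small"},
    {"id": "LHD-05", "size": "large"},
    {"id": "LHD-06", "size": "small"},
]

def stratified_assignment(sites, arms, stratum_key, seed=2024):
    """Assign whole sites (clusters) to study arms, alternating arms
    within each stratum after a seeded shuffle so arm counts stay balanced."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    strata = {}
    for site in sites:
        strata.setdefault(site[stratum_key], []).append(site)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, site in enumerate(members):
            assignment[site["id"]] = arms[i % len(arms)]
    return assignment

print(stratified_assignment(sites, ["standard", "enhanced"], "size"))
```

Stratifying by a site characteristic (here, a made-up size category) keeps the arms comparable even with a small number of clusters, which matters because the unit of inference in these designs is the site, not the individual.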
Research funders like the Patient-Centered Outcomes Research Institute (PCORI) are now actively encouraging the use of pragmatic trials, mostly in the context of studying specific therapeutic interventions and their clinical delivery systems. And groups like MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) are actively organizing low-cost pragmatic trials of a variety of health and social program interventions centered on poverty reduction, mostly in developing countries but more recently in the U.S. as well. The Coalition for Evidence-Based Policy and the White House itself are also major proponents of this approach in U.S. health and social policy research.
These developments suggest that the time is right for American public health agencies and their partners who implement public health programs and policies across the U.S. to expand their use of pragmatic experimental trials. Many of the programs, policies, and delivery system strategies used in public health to prevent disease and injury and promote health on a population-wide basis have inadequate evidence concerning their health and economic impact. This fact is partly responsible for ongoing political and policy controversies concerning the ACA’s Prevention and Public Health Fund. Moreover, public health delivery systems in the U.S. are undergoing significant changes in their organization, financing, and operations due to economic and policy imperatives triggered by health care reform and public finance constraints. In the face of these policy uncertainties and pressures for change, why not incorporate pragmatic trials into our public health decision-making and implementation processes?
State and local public health agencies often have considerable (though perhaps under-appreciated) discretion over key details concerning how programs and policies are implemented. What types and levels of staffing to use, where and how to locate programs, how to recruit and engage target populations, how to tailor approaches for specific subgroups of interest, what mechanisms to use for disseminating and communicating information, what duration, sequencing and timing of activities to implement, how to divide roles and responsibilities among collaborating organizations, what financing and payment mechanisms to use – to the extent that these ingredients plausibly influence the effectiveness and efficiency of public health strategies, they represent promising targets for experimentation. Moreover, public health agencies are awash in existing data sources from both active and passive surveillance systems and program reporting requirements that can be used to structure pragmatic trials.
Powerful examples of pragmatic trials organized in public health settings are becoming more numerous, providing proof-of-concept that it is possible and worthwhile to organize such experiments. For example, a group of after-school programs for children in the Chicago area organized a field experiment to test different informational and material incentives designed to improve children’s food choices in a USDA-supported free meal program, showing that the introduction of small material incentives increased the take-up of healthy snacks by more than 400%. Similarly, a trial that I posted about earlier this summer from the ARM meeting tested the cost-effectiveness of a novel strategy for boosting child vaccination rates using a reminder and recall (R&R) intervention delivered centrally by local health departments in collaboration with community-based primary care practices. That study found that the collaborative, health department-delivered R&R model outperformed a standard physician practice-based R&R model on both vaccination rates and costs, clearly showing the value of collective action involving public health agencies and primary care practices. Most recently, the Coalition for Evidence-Based Policy announced 3 new studies that will receive funding through its competition for low-cost randomized controlled trials, and 2 of these studies involve public health programs and delivery systems. One study in Durham, NC, costing just $183,000 will examine the health and economic impact of a postnatal nurse home visiting program, and another, conducted by the federal Occupational Safety and Health Administration (OSHA) at a cost of just $153,000, will test the effectiveness of a novel inspection policy that randomly selects worksites to receive onsite federal worker safety inspections.
Our National Coordinating Center for PHSSR is working to help create the conditions and infrastructure necessary to support pragmatic trials and other strong research designs in U.S. public health delivery system settings. For example, we have launched Practice-Based Research Networks (PBRNs) in more than 30 states that bring together state and local public health agencies and university-based researchers into ongoing research collaborations for the purposes of studying variation, change, and innovation in public health delivery. With several years of history in collaborative research now under their belts, many of our PBRNs are well-positioned to progress to pragmatic trial designs in which a network’s participating local public health settings can be randomly assigned to pursue different implementation approaches. One such trial is already underway in our Kentucky PBRN to test the effects of cultural competency training for local public health workers. A growing base of experience now exists with implementing studies that involve multiple PBRNs in the U.S., bringing a larger number and diversity of communities and public health settings into the study design. Moreover, we recently launched a series of natural experiment studies in public health settings that, while not randomized, are helping both public health researchers and practitioners use more advanced research design and analytic methodologies like propensity-score matching and instrumental-variables estimation to support causal inferences and address threats to internal validity.
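For readers curious what propensity-score matching looks like mechanically, here is a minimal sketch of greedy 1:1 nearest-neighbor matching with a caliper, assuming the propensity scores have already been estimated (e.g., via logistic regression on pre-treatment covariates). The unit IDs and scores are invented for illustration:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    `treated` and `controls` map unit id -> estimated propensity score.
    Returns (treated_id, control_id) pairs whose score gap is within
    the caliper; each control is used at most once."""
    available = dict(controls)
    pairs = []
    # Match treated units in descending score order (a common convention,
    # since high-score units have the fewest close controls).
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # match without replacement
    return pairs

treated = {"T1": 0.62, "T2": 0.35}
controls = {"C1": 0.60, "C2": 0.37, "C3": 0.90}
print(greedy_match(treated, controls))  # → [('T1', 'C1'), ('T2', 'C2')]
```

After matching, outcomes are compared across the matched pairs; the caliper simply discards treated units with no sufficiently similar control rather than forcing a poor match.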
On the data and measurement front, our Center is working with colleagues at the University of Washington and other partners to standardize the measurement approaches and data sources used in state and local public health settings, in order to make large-scale pragmatic trials even more feasible. For example, our recent Multi-Network Practice and Outcome Variation Examination (MPROVE) study has been working with PBRNs in six states to develop and implement a standard set of measures of public health delivery involving chronic disease prevention, communicable disease control, and environmental health protection – many of which are constructed using existing, routine data systems at state and local levels. These MPROVE measures, maintained over time, can provide a powerful data platform for supporting pragmatic trials in many different programmatic areas ranging from obesity prevention to food-borne illness control. Our Center also works to construct and analyze longitudinally linked analytic data files from a variety of other sources, including NACCHO’s periodic National Profile census survey of local health departments, the Census Bureau’s Annual Surveys of State & Local Government Finance, and our own National Longitudinal Survey of Public Health Systems, which has followed a national cohort of communities since 1998.
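The longitudinal linkage described above boils down to keying repeated cross-sections on a stable identifier. A bare-bones sketch, with hypothetical survey waves and field names (not actual NACCHO or Census data):

```python
from collections import defaultdict

# Hypothetical extracts from two survey waves; agency IDs and the
# "fte" field are illustrative placeholders, not real survey items.
wave_1998 = [{"agency_id": "A1", "fte": 40}, {"agency_id": "A2", "fte": 12}]
wave_2006 = [{"agency_id": "A1", "fte": 44}, {"agency_id": "A3", "fte": 9}]

def link_waves(waves):
    """Collapse repeated cross-sections into one longitudinal record
    per agency, keyed on a stable identifier."""
    panel = defaultdict(dict)
    for year, records in waves.items():
        for rec in records:
            values = {k: v for k, v in rec.items() if k != "agency_id"}
            panel[rec["agency_id"]][year] = values
    return dict(panel)

panel = link_waves({1998: wave_1998, 2006: wave_2006})
print(panel["A1"])  # → {1998: {'fte': 40}, 2006: {'fte': 44}}
```

Agencies missing from a wave simply lack that year’s entry, which makes attrition and late entry visible in the linked file rather than silently dropped.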
To be sure, experimental designs are neither appropriate nor feasible for answering all of the questions of interest in empirical public health economics and PHSSR. But many opportunities to use trials in public health settings to produce valuable evidence currently go unrealized. Using resources like PBRNs and our expanding set of PHSSR measures and data sources, it is possible to employ pragmatic randomized trial designs more frequently to generate strong evidence about what works best, and for whom, in public health delivery. Rigor and relevance need not be mutually exclusive.