My post a few weeks ago on the APHA meeting suggested keeping an eye on the work of Frank Chaloupka’s tobacco control research group at UIC, and already we have an excellent example of why. Their new paper in BMJ’s Tobacco Control journal takes aim at the regulatory impact analysis recently conducted by the FDA on the proposed requirement that tobacco manufacturers use graphic warning labels (GWLs) on their products sold in the U.S. The FDA analysis was central to the U.S. Appeals Court’s recent decision to strike down the FDA’s first GWL regulations, due (at least in part) to the lack of evidence that these regulations produce sizeable population-level reductions in tobacco use. The UIC-based team of Huang, Chaloupka, and Fong shows that the FDA’s analysis comes up short in several key methodological areas, resulting in an estimate of GWLs’ impact on smoking rates that is 33-53 times (!) too small.
The UIC analysis showcases the scientific rigor that can be achieved when one combines: (1) a strong quasi-experimental research design; (2) longitudinal, retrospective data that contain as many pre- and post-observation periods as possible; and (3) careful approaches to measurement. Far too often, one or more of these key ingredients gets left off the table by public health researchers pursuing a quicker and easier approach, as appears to be the case with the FDA analysis. Chaloupka’s team uses a difference-in-differences (DD) design, a quasi-experimental approach that has been widely used in econometrics since at least the 1980s but is still quite under-utilized in public health. (Editorial note: I never let my public health doctoral students escape without exposure to DD.) The team also pays careful attention to measuring cigarette prices as accurately as possible, rather than relying on more accessible proxy measures based on taxes.
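For readers new to DD, the core logic can be sketched in a few lines of Python. Note that the numbers below are invented purely for illustration; they are not taken from the UIC paper or the FDA analysis. The idea is to compare the before-to-after change in an outcome for a group exposed to a policy against the same change in a comparable unexposed group, so that shared background trends net out:

```python
# A minimal difference-in-differences sketch. All figures are hypothetical.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """DD estimate: the change in the treated group's outcome
    minus the change in the control group's outcome."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Invented smoking-prevalence rates (%) before and after a policy change.
treated_pre, treated_post = 20.0, 17.0   # jurisdiction that adopted the policy
control_pre, control_post = 19.0, 18.0   # comparison jurisdiction, no policy

effect = diff_in_diff(treated_pre, treated_post, control_pre, control_post)
print(effect)
```

Here the treated group fell by 3 points and the control group by 1, so the DD estimate attributes a 2-point extra drop to the policy. In practice this comparison is run as a regression with an interaction term (and controls, such as the carefully measured cigarette prices noted above), but the subtraction is the heart of the design.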
The fruit of this labor is an empirical study in public health economics with enormous potential to inform regulatory and judicial decision-making. And (relevant to last week’s post), this work is a reminder never to underestimate the power of a well-designed quasi-experimental study.