Open Access
Analytical Methods for a Learning Health System: 2. Design of Observational Studies
Author(s) -
Michael A. Stoto,
Michael Oakes,
Elizabeth A. Stuart,
Elisa L. Priest,
Lucy A. Savitz
Publication year - 2017
Publication title -
egems (generating evidence and methods to improve patient outcomes)
Language(s) - English
Resource type - Journals
ISSN - 2327-9214
DOI - 10.5334/egems.251
Subject(s) - observational study , causal inference , counterfactual thinking , confounding , natural experiment , research design , clinical study design , internal validity , external validity , epidemiology , program evaluation
This review, the second paper in a series on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, summarizes study design approaches, including choosing appropriate data sources and methods for the design and analysis of natural and quasi-experiments.

The primary strength of the study design approaches described in this section is that they study the impact of a deliberate intervention in real-world settings, which is critical for external validity. These evaluation designs address estimating the counterfactual: what would have happened if the intervention had not been implemented. At the individual level, epidemiologic designs focus on identifying situations in which bias is minimized. Natural and quasi-experiments focus on situations where the change in assignment breaks the usual links that could lead to confounding, reverse causation, and so forth. And because these observational studies typically use data gathered for patient management or administrative purposes, the possibility of observation bias is minimized.

The disadvantages are that one cannot necessarily attribute the effect to the intervention (as opposed to other things that might have changed), and the results do not indicate what about the intervention made a difference. Because they cannot rely on randomization to establish causality, program evaluation methods demand more careful consideration of the "theory" of the intervention and how it is expected to play out. A logic model describing this theory can help to design appropriate comparisons, account for all influential variables in a model, and ensure that evaluation studies focus on the critical intermediate and long-term outcomes as well as possible confounders.
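The counterfactual logic the abstract describes is the core of quasi-experimental designs such as difference-in-differences. A minimal sketch, with purely illustrative numbers (not from the paper), showing how a comparison site's trend stands in for the counterfactual under a parallel-trends assumption:

```python
# Difference-in-differences (DiD) sketch: one common quasi-experimental
# design for estimating the counterfactual. All numbers are illustrative.

# Mean outcome (e.g., a readmission rate) before/after an intervention,
# for an intervention site and a comparison site.
pre_treated, post_treated = 0.20, 0.14
pre_control, post_control = 0.21, 0.19

# The comparison site's change estimates what would have happened at the
# intervention site absent the intervention (the counterfactual trend),
# assuming the two sites would otherwise have moved in parallel.
counterfactual_change = post_control - pre_control   # -0.02
observed_change = post_treated - pre_treated         # -0.06

# DiD effect: observed change minus the counterfactual change.
did_effect = observed_change - counterfactual_change
print(round(did_effect, 3))  # -0.04: a 4-point reduction attributed to the intervention
```

In practice the same estimate is usually obtained from a regression with site, period, and interaction terms, which also accommodates covariates identified by the logic model.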
