CAN QUASI‐EXPERIMENTAL EVALUATIONS THAT RELY ON STATE LONGITUDINAL DATA SYSTEMS REPLICATE EXPERIMENTAL RESULTS?
Author(s) - Fatih Unlu, Douglas Lee Lauen, Sarah Crittenden Fuller, Tiffany Berglund, Elc Estrera
Publication year - 2021
Publication title -
Journal of Policy Analysis and Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.898
H-Index - 84
eISSN - 1520-6688
pISSN - 0276-8739
DOI - 10.1002/pam.22295
Subject(s) - propensity score matching , natural experiment , covariate , matching (statistics) , selection bias , econometrics , replication (statistics) , attendance , unobservable , statistics , randomized experiment , confounding , average treatment effect , economics , political science
Do quasi-experimental (QE) studies conducted with the baseline covariates typically available in state longitudinal administrative databases yield unbiased effect estimates? This paper conducts a within-study comparison (WSC) that compares experimental impacts of early college high school (ECHS) attendance with QE impacts estimated from state and local data. We find that (1) QE models for outcomes with natural (matching) pretests replicated the randomized benchmarks quite well; (2) the replication bias is not sensitive to the type of propensity score model or matching method; and (3) imposing locational restrictions on the comparison students (i.e., local matching, here choosing them from among non-treatment students who attended the same feeder middle schools as the treatment students) does not decrease the QE bias; on the contrary, locally matched models performed worse than models without this restriction for most outcomes. The first two findings are generally consistent with other education WSCs, while the third is not, suggesting that where selection is driven by individual-level factors, as it is here, local matching may yield biased treatment effect estimates: it greatly reduces the pool of potential comparison units and can distort balance on unobservable confounders while prioritizing balance on observable ones.
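The QE approach the abstract evaluates (estimating a propensity score from baseline covariates, then matching treatment students to comparison students) can be sketched as follows. This is a minimal illustration on simulated data, not the study's ECHS data; the selection mechanism, effect size, and the numpy-only logistic regression are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a selection-on-observables scenario (hypothetical, for illustration):
# students with higher pretest scores are more likely to enter treatment.
n = 2000
pretest = rng.normal(50, 10, n)                      # baseline (matching) pretest
p_treat = 1 / (1 + np.exp(-(pretest - 55) / 5))      # selection depends on pretest
treated = rng.random(n) < p_treat
true_effect = 5.0
outcome = 0.8 * pretest + true_effect * treated + rng.normal(0, 3, n)

# Estimate propensity scores with a simple logistic regression (gradient descent).
x = (pretest - pretest.mean()) / pretest.std()
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - treated) / n
pscore = 1 / (1 + np.exp(-X @ w))

# 1:1 nearest-neighbor matching on the propensity score, with replacement.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None]).argmin(axis=1)]

# Matched ATT estimate vs. the naive treated-minus-control difference.
att = (outcome[t_idx] - outcome[matches]).mean()
naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive difference: {naive:.2f}, matched ATT: {att:.2f} (true effect 5.0)")
```

Because selection here is driven entirely by the observed pretest, matching removes most of the naive estimate's bias; the paper's third finding concerns what can go wrong when an added locational restriction shrinks the comparison pool available to this kind of matching.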
