WHAT WORKS BEST AND WHEN: ACCOUNTING FOR MULTIPLE SOURCES OF PURE SELECTION BIAS IN PROGRAM EVALUATIONS
Author(s) - Jung, Haeil; Pirog, Maureen A.
Publication year - 2014
Publication title - Journal of Policy Analysis and Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.898
H-Index - 84
eISSN - 1520-6688
pISSN - 0276-8739
DOI - 10.1002/pam.21764
Subject(s) - estimator , selection bias , econometrics , selection (genetic algorithm) , propensity score matching , regression , statistics , randomized experiment , model selection , regression analysis , autoregressive model , computer science , mathematics , artificial intelligence
Most evaluations are still quasi‐experimental, and most recent quasi‐experimental methodological research has focused on various types of propensity score matching to minimize conventional selection bias on observables. Although these methods create better‐matched treatment and comparison groups on observables, the issue of selection on unobservables still looms large. Thus, in the absence of being able to run randomized controlled trials (RCTs) or natural experiments, it is important to understand how well different regression‐based estimators perform in terms of minimizing pure selection bias, that is, selection on unobservables. We examine the relative magnitudes of three sources of pure selection bias: heterogeneous response bias, time‐invariant individual heterogeneity (fixed effects [FEs]), and intertemporal dependence (autoregressive process of order one [AR(1)]). Because the relative magnitude of each source of pure selection bias may vary in different policy contexts, it is important to understand how well different regression‐based estimators handle each source of selection bias. Expanding simulations that have their origins in the work of Heckman, LaLonde, and Smith (1999), we find that difference‐in‐differences (DID) using equidistant pre‐ and postperiods and FEs estimators are less biased and have smaller standard errors in estimating the Treatment on the Treated (TT) than other regression‐based estimators. Our data analysis using the Job Training Partnership Act (JTPA) program replicates our simulation findings in estimating the TT.
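The abstract's core intuition can be illustrated with a minimal simulation, not the authors' actual Monte Carlo design: panel data with unobserved fixed effects and AR(1) errors, where individuals select into treatment on the unobserved fixed effect. A naive post-period comparison of means is contaminated by that selection, while DID (equivalent to the FE estimator with two periods) differences out the fixed effect. All parameter values below are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): simulate two-period panel data
# with time-invariant individual heterogeneity (FEs) and AR(1) errors, with
# selection into treatment on the unobserved FE, then compare a naive
# cross-sectional estimator with difference-in-differences (DID).
import numpy as np

rng = np.random.default_rng(0)
n, tau, rho = 5000, 2.0, 0.5              # individuals, true TT effect, AR(1) coefficient

alpha = rng.normal(0, 1, n)               # unobserved fixed effects
treated = alpha + rng.normal(0, 1, n) > 0 # selection on unobservables

e_pre = rng.normal(0, 1, n)               # errors follow an AR(1) process across periods
e_post = rho * e_pre + rng.normal(0, np.sqrt(1 - rho**2), n)

y_pre = alpha + e_pre                     # pre-treatment outcome
y_post = alpha + tau * treated + e_post   # post-treatment outcome

# Naive post-period comparison of means: biased by selection on alpha
naive = y_post[treated].mean() - y_post[~treated].mean()

# DID: differencing removes alpha, so the estimate recovers the true TT
did = ((y_post - y_pre)[treated].mean()
       - (y_post - y_pre)[~treated].mean())

print(f"naive: {naive:.2f}  DID: {did:.2f}  (true TT = {tau})")
```

Running this shows the naive estimate inflated by roughly the mean difference in fixed effects between groups, while DID lands near the true effect, mirroring the abstract's finding that differencing-based estimators handle time-invariant heterogeneity well.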