When Do Regression‐Adjusted Performance Measures Track Longer‐Term Program Impacts? A Case Study for Job Corps
Author(s) - Schochet, Peter Z.; Fortson, Jane
Publication year - 2014
Publication title - Journal of Policy Analysis and Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.898
H-Index - 84
eISSN - 1520-6688
pISSN - 0276-8739
DOI - 10.1002/pam.21746
Subject(s) - disadvantaged , regression analysis , scale (ratio) , covariate , government (linguistics) , job performance , local government , computer science , econometrics , psychology , economics , job satisfaction , social psychology , economic growth , political science , public administration , machine learning
The use of performance management systems has increased since the Government Performance and Results Act of 1993. While these systems share the goal of trying to improve service delivery and participant outcomes, they do not necessarily provide information on the causal (value‐added) effects of a program, which requires a rigorous impact evaluation. One approach for potentially improving the association between program performance measures and impacts is to adjust performance measures for differences across performance units in participant characteristics and local economic conditions. This article develops a statistical model that describes the conditions under which regression adjustment improves the performance–impact correlation. We then use the model to examine the performance–impact association using extensive data from a large‐scale random assignment evaluation of Job Corps, the nation's largest training program for disadvantaged youths. We find that while regression adjustment changes the Job Corps center performance measures, the adjusted performance measures are not correlated with the impact estimates. The main reasons are the weak associations between the unadjusted Job Corps performance measures and participants’ longer‐term outcomes as measured by the evaluation, as well as the likely presence of unobserved factors across centers that are correlated with outcomes.
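The regression-adjustment approach described above can be illustrated with a minimal sketch: regress unit-level performance measures on observed covariates (participant characteristics, local conditions) and take the residuals as the adjusted measures, then compare how well the unadjusted and adjusted measures track a unit's true value-added. All data here are simulated and hypothetical; this is not the authors' model, only a simplified illustration of the mechanics, and it assumes (unlike the article's empirical finding for Job Corps) that the covariates capture the non-causal variation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 performance units (e.g., training centers).
n_units = 100
# Observed covariates: stand-ins for participant characteristics
# and local economic conditions.
X = rng.normal(size=(n_units, 2))
# Each unit's true causal (value-added) effect, unobserved in practice.
value_added = rng.normal(size=n_units)
# Unadjusted performance measure: covariate-driven differences plus
# value-added plus measurement noise.
perf = 1.0 + X @ np.array([0.8, -0.5]) + value_added \
    + rng.normal(scale=0.3, size=n_units)

# Regression adjustment: OLS of performance on covariates; the residuals
# are the regression-adjusted performance measures.
X1 = np.column_stack([np.ones(n_units), X])
beta, *_ = np.linalg.lstsq(X1, perf, rcond=None)
adjusted = perf - X1 @ beta

# Compare how each measure correlates with true value-added.
r_unadj = np.corrcoef(perf, value_added)[0, 1]
r_adj = np.corrcoef(adjusted, value_added)[0, 1]
print(f"unadjusted r = {r_unadj:.2f}, adjusted r = {r_adj:.2f}")
```

In this stylized setup adjustment raises the performance-impact correlation because the covariates fully account for the non-causal variation; the article's point is that when unobserved factors across centers remain correlated with outcomes, the adjusted measures need not track impacts at all.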