I. INTRODUCTION
Author(s) -
Laura A. Thompson,
Gin Morgan,
Kellie Ann Jurado,
Megan R. Gunnar
Publication year - 2015
Publication title -
Monographs of the Society for Research in Child Development
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.618
H-Index - 63
eISSN - 1540-5834
pISSN - 0037-976X
DOI - 10.1111/mono.12208
Subject(s) - citation, psychology, library science, computer science
Despite the importance of knowing whether development programs achieve their objectives, impact evaluations remain rare in developing economies. This is unfortunate. With the growing use of results-based management by governments, determining whether goals have been attained and convincingly linking changes to specific programs has become increasingly critical. Tracking such outcomes as gains in school enrollment or reductions in infant mortality is indispensable. But simply gathering good data on outcomes sheds little light on why objectives have or have not been met. For this reason, impact evaluations should be a key instrument in policymakers' monitoring and evaluation toolbox.

Impact evaluations rely on the construction of a counterfactual—an attempt to estimate what a given outcome would have been for the beneficiaries of a program if the program had not been implemented. Impact evaluations thus address causality and allow results to be attributed to specific interventions. The challenge of evaluation research arises from the fact that the counterfactual outcome is inherently unobservable, because people cannot simultaneously participate and not participate in a program. The four social fund evaluation studies in this issue illustrate that establishing a counterfactual is usually a matter of using statistical or econometric techniques to construct a control or comparison group.
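The counterfactual logic above can be sketched with a toy simulation (not drawn from the studies in this issue; the effect size, sample size, and random assignment are all assumptions for illustration). When a valid comparison group is constructed, the difference in average outcomes between beneficiaries and the comparison group estimates the program's average impact:

```python
# Illustrative sketch: estimating a program's impact by comparing
# beneficiaries to a constructed comparison group. All data here are
# simulated; TRUE_EFFECT is a hypothetical program impact.
import random

random.seed(0)

TRUE_EFFECT = 5.0  # assumed impact of the program on the outcome

# Simulate baseline outcomes for 1,000 units, then randomly assign
# each unit to the program. Random assignment makes the comparison
# group a valid stand-in, on average, for the unobservable
# counterfactual outcome of the treated group.
treated, comparison = [], []
for _ in range(1000):
    baseline = random.gauss(50, 10)
    if random.random() < 0.5:
        treated.append(baseline + TRUE_EFFECT)  # outcome with program
    else:
        comparison.append(baseline)             # outcome without program

def mean(xs):
    return sum(xs) / len(xs)

# Difference in mean outcomes estimates the average treatment effect.
estimated_effect = mean(treated) - mean(comparison)
print(f"estimated impact: {estimated_effect:.1f}")
```

In observational settings without random assignment, the same comparison is made only after statistical or econometric adjustment (e.g., matching on observed characteristics), since a naively chosen comparison group may differ systematically from beneficiaries.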