On the “How” of Social Experiments: Experimental Designs for Getting Inside the Black Box
Author(s) - Bell, Stephen H.; Peck, Laura R.
Publication year - 2016
Publication title - New Directions for Evaluation
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.374
H-Index - 40
eISSN - 1534-875X
pISSN - 1097-6736
DOI - 10.1002/ev.20210
Subject(s) - creativity , computer science , design of experiments , black box , randomized experiment , research design , factorial experiment , program design language , management science , simple (philosophy) , artificial intelligence , psychology , machine learning , software engineering , sociology , social psychology , engineering , mathematics , social science , epistemology , statistics , philosophy
Program evaluators generally prefer to use the strongest design available to answer relevant impact questions, reserving analytic strategies for use only as necessary. Although the "simple" treatment-versus-control experimental design is well understood and widely used, it is our contention that creativity in evaluation design can help answer more nuanced questions about what it is about a program that is responsible for its impacts. In response, this chapter discusses several experimental evaluation designs that randomize individuals—including multiarmed, multistage, factorial, and blended designs—to permit estimating impacts of specific policy design features or program elements. We hope that recasting some long-standing but underused designs in a new light will motivate their increased use, where appropriate.
