The Use of Mixed Methods in Randomized Control Trials
Author(s) - White, Howard
Publication year - 2013
Publication title - New Directions for Evaluation
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.374
H-Index - 40
eISSN - 1534-875X
pISSN - 1097-6736
DOI - 10.1002/ev.20058
Subject(s) - counterfactual thinking , attribution , computer science , psychological intervention , set (abstract data type) , quality (philosophy) , management science , control (management) , randomized experiment , task (project management) , intervention (counseling) , causal chain , impact evaluation , program evaluation , randomized controlled trial , process (computing) , process management , psychology , social psychology , artificial intelligence , political science , medicine , philosophy , surgery , management , epistemology , pathology , psychiatry , economics , programming language , operating system , business , public administration
Evaluations should be issues driven, not methods driven. The starting point should be the priority programs to be evaluated or policies to be tested. From this starting point, a list of evaluation questions is identified. For each evaluation question, the task is to identify the best available method for answering that question. Hence it is likely that any one study will contain a mix of methods. A crucial question for an impact evaluation is that of attribution: What difference did the intervention make to the state of the world? (framed in any specific evaluation as the difference a clearly specified intervention or set of interventions made to indicators of interest). For interventions with a large number of units of assignment, this question is best answered with a quantitative experimental or quasi‐experimental design. And for prospective, or ex ante, evaluation designs, a randomized control trial (RCT) is very likely to be the best available method for addressing this attribution question, if it is feasible. But an RCT alone answers only the attribution question. A high‐quality impact evaluation will answer a broader range of evaluation questions of a more process nature, both to inform the design and implementation of the program being evaluated and for external validity. Mixed methods combine the counterfactual analysis from an RCT with factual analysis, using quantitative and qualitative data to analyze the causal chain and drawing on approaches from a range of disciplines. The factual analysis will address such issues as the quality of implementation, targeting, barriers to participation, and adoption by intended beneficiaries. © Wiley Periodicals, Inc., and the American Evaluation Association.