Open Access
Telling Context from Mechanism in Realist Evaluation: The role for theory
Author(s) - Hannah Jolly, Lesley Jolly
Publication year - 2014
Publication title - Learning Communities: International Journal of Learning in Social Contexts
Language(s) - English
Resource type - Journals
eISSN - 2202-7904
pISSN - 1329-1440
DOI - 10.18793/lcj2014.14.03
Subject(s) - mechanism (biology), context (archaeology), epistemology, sociology, political science, philosophy, history, archaeology
Realist evaluation is based on the premise that aspects of context trigger particular mechanisms in response to an intervention, which result in observable outcomes. This is often expressed in the formula C + M = O. Contexts are defined as the conditions that an intervention operates in (often but not exclusively sociocultural), while mechanisms are understood to be the future action that people take in response to the intervention. There is much debate, however, about the definitions, and because the distinctions are not clear-cut it can be difficult to decide which is which, particularly when the intervention concerns a program of curricular change. In this paper we discuss how we resolved this dilemma in an evaluation of a curriculum change in 13 universities in Australia and New Zealand. In that case we found a cascade of contexts and mechanisms, whereby what was a mechanism from one point of view (such as the decisions involved in course design) became a context triggering later mechanisms (such as teacher and student behaviours). The scholarly literature defining curriculum helped us to organise our thinking and subsequent analysis in a rational way, but in many evaluations there may not be a handy body of work that discusses how to understand the topic of the intervention in this way, nor do many consultant evaluators have the luxury of long hours in the library. We consider some ways in which evaluators might decide on defining contexts and mechanisms in principled ways, and some of the consequences of those decisions.

introduction

The contribution of Pawson and Tilley’s (1997) realist approach to program evaluation has constituted a significant shift from available methods. It is most simply understood as a method for evaluating “what works for whom in what circumstances” (Pawson & Tilley, 1997). Rather than focus on global judgements about the worth of a program, it seeks to identify the varieties of success and failure that any program experiences and the factors that contribute to all of the eventual outcomes. The basic premise is that there will be a range of conditions, often sociocultural, that affect the outcomes of any program. These are referred to as Contexts (C). In addition, the ways in which people respond – their reasoning about what they should do and the resources they can bring to bear (Pawson & Tilley, 1997, p. 67) – will also vary. In the realist approach this is referred to as the Mechanism (M). Hypotheses about how the program results in observed outcomes (O) are often expressed in the formula C + M = O (CMO).

The attraction of this approach lies in its recognition that real-life programs are rarely entirely successful or entirely unsuccessful, but have patches of success and failure. Also, it is common to find that a program judged to have worked well in one place fails in another, or in subsequent years. Realist Evaluation (RE) focusses not only on the underlying factors behind outcomes but also on the various ways in which they can combine and recombine to cause outcomes. Since its publication, the approach has been widely taken up and applied with varying methodological success (Pawson & Manzano-Santaella, 2012), suggesting that the application of the method is not so simple. Pawson and Manzano-Santaella (2012, p. 176) have now published a discussion of some of the challenges of the “practice on the ground,” including the oft-expressed problem of “I am finding it hard to distinguish Cs from Ms and Os, what is the secret?” (Pawson & Manzano-Santaella, 2012, p. 188). Whilst their paper discusses this issue in some detail, we will also address the subtleties of this challenge, and attempt to explore their recommendation that “which property falls under which category is determined by its explanatory role” (Pawson & Manzano-Santaella, 2012, p. 187).

the challenges of applying the realist evaluation approach

Whilst much of the discussion of the difficulties of applying the realist approach is given over to understanding the differences in function between Contexts and Mechanisms, this may be premature if a suitable understanding of the function of a CMO configuration as a whole is not applied to the process of evaluation. In their 2012 “workshop” on the method, Pawson and Manzano-Santaella (2012, p. 188) emphasise that “the function of CMO configurations...is that they are rather narrow and limited hypotheses, which attempt to tease out specific causal pathways, as prespecified mechanisms, acting in pre-specified contexts spill out into pre-specified and testable outcome patterns.” That is to say, these configurations are sensitive to the actual moment in the intervention process being considered. They need to be used at appropriate times and in appropriate ways during the data analysis if they are to help us to make meaningful evaluations.

In our case, we had an idea of what the intervention was meant to achieve and how it was meant to achieve it, and we began analysis by trying to define contexts and mechanisms directly from the data. When we took this approach we found that it led us in circles. This is because the function of a variable in a moment of analysis (that is, whether it acts as a Context or as a Mechanism) is very much dependent on the focus of explanation at a given point in the analysis. Something which is a mechanism at one stage of an intervention, such as the reasoning leading to particular decisions about how to design and implement a program, may then produce a fresh context for a later stage, such as the way subjects strategise in response to the program design. This situation was complicated in the example evaluation by the fact that the program of intervention was taking place in multiple sites, and with differing purposes and methods of implementation in each site. We knew that the focus of explanation needed to vary from site to site, but had not yet pinned down how. Add to this that the program in question concerned a curricular innovation (the notion of curriculum being notoriously slippery), and we quickly discovered that analysis of the data we had collected was creating more questions than answers. As Pawson and Manzano-Santaella (2012, p. 178) reiterate, “realist evaluation is [or should be] avowedly theory-driven; it searches for and refines explanations of program effectiveness.” While it can be daunting to be told that more theory is needed, in our case it turned out that the theory that helped us to define the specific causal pathways to be investigated was a quite practical one about the nature of curriculum. While this is a highly debated topic, once we had settled on an understanding of what “curriculum” encompasses and how the various elements interact, the evaluation task became much easier.
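The cascade is easier to see when sketched as a data structure. The short Python sketch below is an illustration only, not anything drawn from the paper or the evaluation itself; the names (CMO, cascade) and the example stages are hypothetical. It shows just the core idea: the mechanism fired at one stage, together with its outcome, is carried into the context of the next stage.

    from dataclasses import dataclass

    @dataclass
    class CMO:
        """One narrow hypothesis: context + mechanism -> outcome."""
        context: list[str]   # conditions the intervention operates in
        mechanism: str       # reasoning/resources triggered in response
        outcome: str         # the observable result

    def cascade(stages: list[CMO]) -> list[CMO]:
        """Carry each stage's mechanism and outcome forward into the
        context of the following stage."""
        linked: list[CMO] = []
        carried: list[str] = []
        for stage in stages:
            linked.append(CMO(context=stage.context + carried,
                              mechanism=stage.mechanism,
                              outcome=stage.outcome))
            carried = [stage.mechanism, stage.outcome]
        return linked

    # Hypothetical stages, loosely modelled on the curriculum example:
    design = CMO(context=["first-year curriculum emphasises theory"],
                 mechanism="staff decide to build the course around a design project",
                 outcome="team-based project work is introduced")
    response = CMO(context=[],  # filled in by cascade()
                   mechanism="students strategise in response to the project design",
                   outcome="communication and teamwork skills develop")

    for stage in cascade([design, response]):
        print(stage.context, "->", stage.mechanism, "->", stage.outcome)

Here the course-design decision, which is the mechanism of the first configuration, reappears as context in the second, which is the sense in which contexts and mechanisms cascade across stages of an intervention.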
the example evaluation

The evaluation in question was of a program of curricular innovation that had taken place at a variety of universities across Australia and New Zealand. The program involved the introduction of the Engineers Without Borders (EWB) Challenge into the first-year engineering curriculum. The EWB Challenge was conceived as a means of exposing students to the principles of engineering design and problem solving, by providing a design challenge based on the requirements of a real, third-world community that has worked with EWB on sustainable development projects. This program of innovation constituted a “widespread curriculum renewal in engineering education”, because:

The first year in engineering had traditionally focussed on basic science and maths and the introduction of the Challenge and its associated team-based project work allowed for development of the so-called “soft skills” amongst the graduate attributes: communication and teamwork and an understanding of the need for sustainable development. The Challenge has been in operation since 2008 and every engineering school in Australia has made some use of them at one time or another. This [evaluation] project was carried out with the co-operation of 13 universities from Australia and New Zealand who have maintained their use of the projects, albeit in widely divergent types of student cohort and courses. (Jolly, 2014, p. 3)

Thus, the evaluation was seeking to understand both how the program had been applied differently in different sites, and for different purposes, and what contributed to local successes and failures. As such, the evaluation was focused on both process and outcome, in that it sought to discover both how the intervention worked and with what effect. Realist Evaluation (RE) is ideal for this kind of multi-site, multi-context situation, where correlations between variables are unlikely to apply in all cases and an understanding of the range of generative causation that can apply is required.

In CMO terms, the ideal, desired operation of the intervention could be expressed in a highly compressed form (Table 1). It needs to be noted that there are dangers in such shorthand representations of CMO configurations (Pawson & Manzano-Santaella, 2012), which we will discuss further below. For now we acknowledge that this hypothesis about how the program should work includes many finer-grained levels of CMO configuration. In fact it was the task of the evaluation to find out just what those finer-grained configurations were.

Table 1: The ideal CMO configuration for the program (based on Jolly, 2014)

Context (C) + Mechanism (M) = Outcome (O)

Context (C):
• First-year engineering curricula emphasise technical and theoretical subjects and pay little attention to practical “real-world” engineering.
• Need to develop so-called “soft skills” such as communication and teamwork.
