An important insight from theory-based evaluation is that policy interventions are often assumed to trigger certain social and behavioral responses among people and organizations, while in reality this may not be the case. The theories linking interventions to outcomes should therefore be carefully articulated: what are the causal pathways linking intervention outputs to processes of change and impact?
Interventions are Theories: Opening Up the "Black Box"
The intervention theory provides an overall framework for making sense of the processes of change an intervention may induce. Several types of evidence can be used to articulate the intervention theory, for example:
- Logic models: An intervention’s existing logical framework provides a useful starting point for mapping the causal assumptions linked to its objectives; other written documents produced within the framework of an intervention are also useful in this respect;
- Program theory: Insights and expectations of policy makers, staff, and other stakeholders on how they think the intervention will affect, is affecting, or has affected target groups;
- (Written) Evidence on past experiences of similar interventions (including those implemented by other organizations);
- Research literature on mechanisms and processes of change in certain institutional contexts, for particular social problems, in specific sectors, etc.
Methods for reconstructing the underlying assumptions of project, program, or policy theories include:
- a policy-scientific method, which focuses on interviews, documents and argumentation analysis
- a strategic assessment method, which focuses on group dynamics and dialogue
- an elicitation method, which focuses on cognitive and organizational psychology
Central to all three approaches is the search for the mechanisms that are believed to be ‘at work’ when a policy is implemented.
Testing Intervention Theories on Impact
After articulating the assumptions about how an intervention is expected to affect outcomes and impacts, the question arises to what extent these assumptions are valid. In practice, evaluators have a wide range of methods and techniques at their disposal for testing the intervention theory. Broadly, two approaches can be distinguished:
- Causal story: reconstruct how and to what extent the intervention has produced results.
- Benchmark: use the theory as an explicit benchmark for testing (some of) the assumptions in a formal manner.
In short, theory-based methodological designs can be situated anywhere between ‘telling the causal story’ and ‘formally testing causal assumptions’.
The systematic development and corroboration of the causal story can be achieved through causal contribution analysis, which aims to demonstrate whether or not the evaluated intervention is one of the causes of observed change. Contribution analysis relies upon chains of logical arguments (result chains) that are verified through careful analysis. Rigor in causal contribution analysis involves systematically identifying and investigating alternative explanations for observed impacts. This includes being able to rule out implementation failure as an explanation for a lack of results, and developing testable hypotheses and predictions to identify the conditions under which interventions contribute to specific impacts.
The causal story is inferred from the following evidence:
- There is a reasoned theory of change for the intervention: it makes sense, it is plausible, and it is agreed upon by key players.
- The activities of the intervention were implemented.
- The theory of change —or key elements thereof— is verified by evidence: the chain of expected results occurred.
- Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.
One of the key limitations of the foregoing analysis is pinpointing the exact causal effect of the intervention on impact. However strong the causal argumentation linking intervention and impact, and however good the available data on indicators and contributing factors, uncertainty remains about the magnitude of the impact and about the extent to which changes in impact variables are really due to the intervention rather than to other influential variables. This is called the attribution problem.
To what extent can changes in outcomes of interest be attributed to a particular intervention? This attribution problem is often referred to as the central problem in impact evaluation. Attribution involves both isolating and accurately measuring the particular contribution of an intervention and ensuring that causality runs from the intervention to the outcome.
A proper analysis of the attribution problem compares the situation ‘with’ the intervention to what would have happened in its absence, the ‘without’ situation (the counterfactual). This “with and without” comparison is challenging because the situation without the intervention cannot be observed; it has to be constructed by the evaluator.
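The “with and without” logic can be illustrated with a toy numeric sketch. All numbers below are hypothetical, and the counterfactual outcome is assumed known only for illustration; in practice it is never observed and must be estimated, for example from a comparison group:

```python
# Toy illustration of the attribution problem (hypothetical numbers).
# The impact attributable to the intervention is the difference between
# the observed "with" outcome and the unobservable "without" counterfactual.

outcome_with = 72.0      # observed outcome after the intervention
outcome_baseline = 50.0  # observed outcome before the intervention
counterfactual = 65.0    # what would have happened anyway (never observed;
                         # assumed known here purely for illustration)

# A naive before-after comparison credits the intervention with all change.
naive_before_after = outcome_with - outcome_baseline  # 22.0

# The counterfactual comparison isolates the intervention's contribution.
attributable_impact = outcome_with - counterfactual   # 7.0

# The gap between the two is change driven by other influencing factors
# (background trends, other programs, etc.).
other_factors = naive_before_after - attributable_impact  # 15.0
print(naive_before_after, attributable_impact, other_factors)
```

The sketch shows why the before-after difference alone cannot answer the attribution question: it bundles the intervention’s contribution together with all other influences on the outcome.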
- Impact Portal on energypedia
- Impact Evaluation - Mixed Methods
- Davidson, E. J. (2003): The “Baggage” of Theory-Based Evaluation. In: Journal of MultiDisciplinary Evaluation, (4), iii-xiii.
- Davidson, E. J. (2004): Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks: Sage.
- Leeuw, F. (2003): Reconstructing Program Theories: Methods Available and Problems to be Solved. American Journal of Evaluation, 24(1).
- Pawson, R. (2003): Nothing as Practical as a Good Theory. In: Evaluation, 9(4).
- van der Knaap, P. (2004): Theory-based Evaluation and Learning: Possibilities and Challenges. In: Evaluation, 10(1), 16-34.
- Drivers of Adoption: Innovation and Behavioural Theory
- Leeuw, F. & Vaessen, J. (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft Version for Discussion at the Cairo Conference, March-April 2009. NONIE – Network of Networks on Impact Evaluation, pp. 20-25.
- Mayne, J. (2001): Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly. Canadian Journal of Program Evaluation, 16(1), 1-24.