Theory Based Approach
Revision as of 09:24, 7 December 2009
Interventions are theories: opening up the "black box"
An important insight from theory-based evaluation is that policy interventions are often assumed to trigger certain social and behavioral responses among people and organizations, while in reality this may not be the case. The theories linking interventions to outcomes should therefore be carefully articulated: what are the causal pathways linking intervention outputs to processes of change and impact?
The intervention theory provides an overall framework for making sense of potential processes of change induced by an intervention. Several pieces of evidence can be used for articulating the intervention theory, for example:
- Logic models: an intervention’s existing logical framework provides a useful starting point for mapping the causal assumptions linked to its objectives; other written documents produced within the framework of the intervention are also useful in this respect;
- Program theory: the insights and expectations of policy makers, staff, and other stakeholders about how they think the intervention will affect, is affecting, or has affected target groups;
- (Written) Evidence on past experiences of similar interventions (including those implemented by other organizations);
- Research literature on mechanisms and processes of change in certain institutional contexts, for particular social problems, in specific sectors, etc.
Methods for reconstructing the underlying assumptions of project, program, or policy theories include:
- a policy-scientific method, which focuses on interviews, documents and argumentation analysis;
- a strategic assessment method, which focuses on group dynamics and dialogue; and
- an elicitation method, which focuses on cognitive and organizational psychology.
Central to all three approaches is the search for the mechanisms that are believed to be ‘at work’ when a policy is implemented.
Testing intervention theories on impact
After articulating the assumptions about how an intervention is expected to affect outcomes and impacts, the question arises as to what extent these assumptions are valid. In practice, evaluators have at their disposal a wide range of methods and techniques to test the intervention theory. Two broad approaches can be distinguished.
- The first uses the theory as the basis for constructing a ‘causal story’ about how, and to what extent, the intervention has produced results. Usually, different methods and sources of evidence are used to refine the theory iteratively until a credible and reliable causal story has been generated.
- The second way is to use the theory as an explicit benchmark for testing (some of) the assumptions in a formal manner. Besides providing a benchmark, the theory provides the template for method choice, variable selection and other data collection and analysis issues. This approach is typically applied in statistical analysis, but is not in any way restricted to this type of method.
In short, theory-based methodological designs can be situated anywhere between ‘telling the causal story’ and ‘formally testing causal assumptions’.
The systematic development and corroboration of the causal story can be achieved through causal contribution analysis (Mayne, 2001), which aims to demonstrate whether or not the evaluated intervention is one of the causes of observed change. Contribution analysis relies upon chains of logical arguments (result chains) that are verified through careful analysis. Rigor in causal contribution analysis involves systematically identifying and investigating alternative explanations for observed impacts. This includes being able to rule out implementation failure as an explanation for a lack of results, and developing testable hypotheses and predictions to identify the conditions under which interventions contribute to specific impacts.
The causal story is inferred from the following evidence:
- There is a reasoned theory of change for the intervention: it makes sense, it is plausible, and it is agreed upon by key players.
- The activities of the intervention were implemented.
- The theory of change (or key elements thereof) is verified by evidence: the chain of expected results occurred.
- Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.
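The evidence checklist above can be sketched as a simple data structure: a hypothetical results chain in which each link records whether the expected result occurred and whether rival explanations were ruled out. The links and field names below are illustrative assumptions, not part of any standard contribution-analysis tool.

```python
# Hypothetical results chain for a contribution analysis (after Mayne, 2001).
# Each link in the chain records the evidence gathered for it; all names
# and values here are invented for illustration.

chain = [
    {"link": "training delivered -> skills improved",
     "result_observed": True, "rivals_ruled_out": True},
    {"link": "skills improved -> practices changed",
     "result_observed": True, "rivals_ruled_out": True},
    {"link": "practices changed -> incomes rose",
     "result_observed": True, "rivals_ruled_out": False},
]

def contribution_story_credible(chain):
    """The causal story is credible only if every link in the result chain
    is verified by evidence AND alternative explanations are ruled out."""
    return all(l["result_observed"] and l["rivals_ruled_out"] for l in chain)

# Links where rival explanations have not yet been examined point to
# where further evidence gathering should focus.
weak_links = [l["link"] for l in chain if not l["rivals_ruled_out"]]

print(contribution_story_credible(chain))  # False: one link has unexamined rivals
print(weak_links)                          # ['practices changed -> incomes rose']
```

The point of the sketch is the iterative logic of contribution analysis: the story is only as strong as its weakest link, and unexamined links tell the evaluator where to collect more evidence.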
A key limitation of the foregoing analysis lies in pinpointing the exact causal effect of the intervention on impact. Despite the potential strength of the causal argumentation linking the intervention to impact, and despite the possible availability of data on indicators and on contributing factors, uncertainty remains about the magnitude of the impact, as well as about the extent to which changes in impact variables are really due to the intervention rather than to other influential variables. This is called the attribution problem.
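The attribution problem can be illustrated with a toy numeric example (all figures hypothetical): the observed change in an impact variable mixes the intervention's effect with other influences, and the observed difference alone cannot separate the two.

```python
# Toy illustration of the attribution problem; every number is invented.
# The "true" intervention effect is unknown to the evaluator, who only
# observes the before/after change in the impact indicator.

baseline = 100.0            # impact indicator before the intervention
intervention_effect = 8.0   # true effect (unobservable in practice)
other_factors = 5.0         # e.g. economic trends, other programs

observed_after = baseline + intervention_effect + other_factors
observed_change = observed_after - baseline

# The evaluator sees a change of 13, but only 8 of it is attributable
# to the intervention; without a counterfactual, the two are entangled.
print(observed_change)                          # 13.0
print(observed_change == intervention_effect)   # False: the attribution gap
```

The sketch only restates the problem, not a solution: separating the 8 from the 13 requires a counterfactual (e.g. a comparison group), which is exactly what pure before/after observation lacks.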
Source:
Leeuw, F. & Vaessen, J. (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft version for discussion at the Cairo conference, March-April 2009. NONIE – Network of Networks on Impact Evaluation, pp. 20-25.
Mayne, J. (2001): Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly. In: Canadian Journal of Program Evaluation, 16(1), 1-24.
Recommended Readings:
Davidson, E. J. (2003): The “Baggaging” of Theory-Based Evaluation. In: Journal of MultiDisciplinary Evaluation, (4), iii-xiii.
Davidson, E. J. (2004): Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks: Sage.
Leeuw, F. (2003): Reconstructing Program Theories: Methods Available and Problems to Be Solved. In: American Journal of Evaluation, 24(1).
Pawson, R. (2003): Nothing as Practical as a Good Theory. In: Evaluation, 9(4).
van der Knaap, P. (2004): Theory-based Evaluation and Learning: Possibilities and Challenges. In: Evaluation, 10(1), 16-34.