Theory Based Approach

Overview

An important insight from theory-based evaluations is that policy interventions are often assumed to address and trigger certain social and behavioral responses among people and organizations, while in reality this may not be the case. The theories linking interventions to outcomes should therefore be carefully articulated: what are the causal pathways linking intervention outputs to processes of change and impact?


Interventions are Theories: Opening Up the "Black Box"

The intervention theory provides an overall framework for making sense of the potential processes of change induced by an intervention. Several pieces of evidence can be used to articulate the intervention theory, for example:

  • Logic models: an intervention’s existing logical framework provides a useful starting point for mapping causal assumptions linked to objectives; other written documents produced within the framework of an intervention are also useful in this respect (see the sketch after this list);
  • Program theory: insights provided by, as well as expectations harbored by, policy makers, staff, and other stakeholders on how they think the intervention will affect, is affecting, or has affected target groups;
  • Written evidence on past experiences of similar interventions (including those implemented by other organizations);
  • Research literature on mechanisms and processes of change in certain institutional contexts, for particular social problems, in specific sectors, etc.
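
As a minimal sketch of what an articulated intervention theory can look like, the following Python snippet models a results chain whose links each carry an explicit causal assumption. The improved-cookstove example, stage names, and assumptions are hypothetical illustrations, not taken from the text above:

```python
from dataclasses import dataclass


@dataclass
class CausalLink:
    """One step in the results chain, together with the assumption
    that must hold for the step to work."""
    from_stage: str
    to_stage: str
    assumption: str


# Hypothetical improved-cookstove programme (illustration only).
results_chain = [
    CausalLink("stoves distributed", "stoves used daily",
               "households accept and adopt the new technology"),
    CausalLink("stoves used daily", "less fuelwood consumed",
               "the stove is more efficient under real cooking conditions"),
    CausalLink("less fuelwood consumed", "household expenses fall",
               "fuelwood is purchased rather than freely collected"),
]

for link in results_chain:
    print(f"{link.from_stage} -> {link.to_stage}")
    print(f"  assumes: {link.assumption}")
```

Making each assumption explicit in this way is what turns a bare logical framework into a testable theory: every link is a claim that the evaluation can later verify or reject.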


Theories are Partly "Hidden" and Require Reconstruction

Methods for reconstructing the underlying assumptions of project/program/policy theories include the following:

  • a policy-scientific method, which focuses on interviews, documents, and argumentation analysis;
  • a strategic assessment method, which focuses on group dynamics and dialogue; and
  • an elicitation method, which focuses on cognitive and organizational psychology.


Central to all three approaches is the search for mechanisms that are believed to be ‘at work’ when a policy is implemented.


Testing Intervention Theories on Impact

After articulating the assumptions on how an intervention is expected to affect outcomes and impacts, the question arises to what extent these assumptions are valid. In practice, evaluators have a wide range of methods and techniques at their disposal to test the intervention theory. We can distinguish between two broad approaches.

  • Causal story: use the theory as the basis for constructing a ‘causal story’ of how and to what extent the intervention has produced results.
  • Benchmark: use the theory as an explicit benchmark for testing (some of) the assumptions in a formal manner (a sketch of such a formal test follows below).


In short, theory-based methodological designs can be situated anywhere between ‘telling the causal story’ and ‘formally testing causal assumptions’.
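
As an illustration of the benchmark approach, the sketch below formally tests a single causal assumption from the hypothetical cookstove theory: that stove users consume less fuelwood than non-users. The data, the assumption, and the choice of a permutation test are all invented for illustration; any formal testing procedure suited to the theory could take its place:

```python
import random

random.seed(7)

# Hypothetical fuelwood use (kg/week): stove users vs non-users.
users =     [8.2, 7.5, 9.1, 6.8, 7.9, 8.4, 7.2, 8.8]
non_users = [9.5, 10.2, 8.9, 11.0, 9.8, 10.5, 9.2, 10.1]


def mean(xs):
    return sum(xs) / len(xs)


# The theory predicts a positive gap: non-users burn more fuelwood.
observed = mean(non_users) - mean(users)

# Permutation test: if stove use had no effect, group labels would be
# arbitrary, so reshuffling them shows how often a gap this large
# arises by chance alone.
pooled = users + non_users
extreme = 0
for _ in range(10_000):
    random.shuffle(pooled)
    diff = mean(pooled[len(users):]) - mean(pooled[:len(users)])
    if diff >= observed:
        extreme += 1

print(f"observed gap: {observed:.2f} kg/week")
print(f"permutation p-value: {extreme / 10_000:.3f}")
```

Here the theory dictated the variable selection (fuelwood use) and the comparison to run; a small p-value supports this one assumption, but the other links of the chain still need their own tests.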

The systematic development and corroboration of the causal story can be achieved through causal contribution analysis (Mayne, 2001), which aims to demonstrate whether or not the evaluated intervention is one of the causes of observed change. Contribution analysis relies upon chains of logical arguments (result chains) that are verified through careful analysis. Rigor in causal contribution analysis involves systematically identifying and investigating alternative explanations for observed impacts. This includes being able to rule out implementation failure as an explanation for a lack of results, and developing testable hypotheses and predictions to identify the conditions under which interventions contribute to specific impacts.


Causal Story

The causal story is inferred from the following evidence (see the sketch after this list):

  • There is a reasoned theory of change for the intervention: it makes sense, it is plausible, and it is agreed upon by key players.
  • The activities of the intervention were implemented.
  • The theory of change, or key elements thereof, is verified by evidence: the chain of expected results occurred.
  • Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.
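
A minimal sketch of how this evidence might be assembled into a contribution claim, following Mayne (2001). The intervention and the truth values are hypothetical; in practice each condition is a judgement backed by documents, data, and stakeholder input rather than a simple boolean:

```python
# The four evidence conditions for a credible causal story,
# recorded for a hypothetical intervention (illustration only).
causal_story = {
    "reasoned theory of change, agreed upon by key players": True,
    "intervention activities were implemented":              True,
    "expected results chain is verified by evidence":        True,
    "other influencing factors assessed or ruled out":       False,
}

for condition, met in causal_story.items():
    print(f"[{'x' if met else ' '}] {condition}")

# The contribution claim stands only if every condition is met;
# otherwise the rival explanations still need investigating.
if all(causal_story.values()):
    print("-> credible contribution claim")
else:
    print("-> causal story incomplete; investigate alternative explanations")
```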


A key limitation of the foregoing analysis is pinpointing the exact causal effect of the intervention on impact. Despite the potential strength of the causal argumentation on the links between intervention and impact, and despite the possible availability of data on indicators as well as on contributing factors, there remains uncertainty about the magnitude of the impact, as well as the extent to which the changes in impact variables are really due to the intervention rather than to other influential variables. This is called the attribution problem.


Attribution Problem

The attribution problem is often referred to as the central problem in impact evaluation: to what extent can changes in outcomes of interest be attributed to a particular intervention? Attribution refers both to isolating and accurately measuring the particular contribution of an intervention, and to ensuring that causality runs from the intervention to the outcome.


Counterfactual Analysis

A proper analysis of the attribution problem compares the situation ‘with’ the intervention to what would have happened in its absence, the ‘without’ situation (the counterfactual). Such a ‘with and without’ comparison is challenging because the situation without the intervention cannot be observed; it has to be constructed by the evaluator.
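
The toy simulation below illustrates why the constructed counterfactual matters. All numbers are invented: the outcome improves on its own over time, so a naive before/after comparison overstates the intervention's effect, while a ‘without’ comparison group approximates the unobservable counterfactual and recovers the effect:

```python
import random

random.seed(1)

TRUE_EFFECT = 5.0       # assumed effect of the intervention (invented)
BACKGROUND_TREND = 3.0  # change that would have happened anyway (invented)
BASELINE = 20.0


def outcome(treated: bool) -> float:
    """Endline outcome: baseline + background trend (+ effect) + noise."""
    noise = random.gauss(0, 1)
    return BASELINE + BACKGROUND_TREND + (TRUE_EFFECT if treated else 0) + noise


with_group = [outcome(treated=True) for _ in range(500)]
without_group = [outcome(treated=False) for _ in range(500)]


def mean(xs):
    return sum(xs) / len(xs)


# Naive before/after: mixes the effect with the background trend (~8).
naive_before_after = mean(with_group) - BASELINE

# With vs without: the comparison group absorbs the trend (~5).
with_vs_without = mean(with_group) - mean(without_group)

print(f"naive before/after estimate: {naive_before_after:.1f}")
print(f"with-vs-without estimate:    {with_vs_without:.1f}")
```

The whole difficulty of counterfactual analysis in practice lies in finding or constructing a ‘without’ group that really behaves like the treated group would have in the absence of the intervention.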


Further Information

  • Impact Portal on energypedia
  • Impact Evaluation - Mixed Methods
  • Davidson, E. J. (2003): The “Baggaging” of Theory-Based Evaluation. In: Journal of Multi Disciplinary Evaluation, (4), iii-xiii.
  • Davidson, E. J. (2004): Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks: Sage.
  • Leeuw, F. (2003): Reconstructing program theories: Methods available and problems to be solved. In: American Journal of Evaluation, 24(1).
  • Pawson, R. (2003): Nothing as Practical as a Good Theory. In: Evaluation, 9(4).
  • van der Knaap, P. (2004): Theory-based Evaluation and Learning: Possibilities and Challenges. In: Evaluation, 10(1), 16-34.
  • Drivers of Adoption: Innovation and Behavioural Theory


References

  • Leeuw, F. & Vaessen, J. (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft version for discussion at the Cairo conference, March-April 2009. NONIE – Network on Impact Evaluation, pp. 20-25.
  • Mayne, J. (2001): Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly. In: Canadian Journal of Program Evaluation, 16(1), 1-24.