Impact Evaluation - Mixed Methods

= Overview =
  
 
= Methodological Triangulation =
  
'''Triangulation''' is a key concept that embodies much of the rationale behind doing '''mixed-method research''' and represents a set of principles to fortify the design, analysis and interpretation of findings in impact evaluation. Triangulation is about looking at things from multiple points of view, a method “to overcome the problems that stem from studies relying upon a single theory, a single method, a single set of data […] and from a single investigator” (Mikkelsen).
  
<u>There are different types of triangulation:</u>

*Data triangulation — To study a problem using different types of data, different points in time, or different units of analysis;
*Investigator triangulation — Multiple researchers looking at the same problem;
*Discipline triangulation — Researchers trained in different disciplines looking at the same problem;
*Theory triangulation — Using multiple competing theories to explain and analyze a problem;
*Methodological triangulation — Using different methods, or the same method over time, to study a problem.
  
 
= Mixed Methods =
  
<u>Advantages of mixed-methods approaches to impact evaluation are the following:</u>

*A mix of methods can be used to assess important outcomes or impacts of the intervention being studied. If the results from different methods converge, then inferences about the nature and magnitude of these impacts will be stronger.
*A mix of methods can be used to assess different facets of complex outcomes or impacts, yielding a broader, richer portrait than one method alone can. Quantitative impact evaluation techniques work well for a limited set of pre-established variables (preferably determined and measured ex ante) but less well for capturing unintended, less expected (indirect) effects of interventions. Qualitative methods or descriptive (secondary) data analysis can be helpful in better understanding the latter.
*One set of methods could be used to assess outcomes or impacts and another set to assess the quality and character of program implementation, including program integrity and the experiences during the implementation phase.
*Multiple methods can help ensure that the sampling frame and the sample selection strategies cover the whole of the target intervention and comparison populations.

= Quantitative Impact Evaluation =

Quantitative impact evaluations have a comparative advantage in addressing the issue of attribution. Evaluations can either be [[Monitoring Glossary|experimental]] (randomised controlled designs), where the evaluator purposely collects data and designs the evaluation in advance, or [[Monitoring Glossary|quasi-experimental]], where data are collected to mimic an experimental situation. [[Monitoring Glossary|Multiple regression analysis]] is the all-purpose technique that can be used in virtually all settings; when the experiment is organised in such a way that no controls are needed, a simple comparison of means can be used instead of a regression, since it will give the same answer.
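
The equivalence claimed above can be shown with a minimal sketch in Python, using simulated data and an arbitrarily assumed treatment effect of 0.5: with random assignment, the OLS coefficient on a treatment dummy equals the simple difference in group means.

<syntaxhighlight lang="python">
# Minimal sketch with made-up data: under random assignment, the OLS
# coefficient on a treatment dummy equals the difference in group means.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)                      # random assignment: 0 = control, 1 = treatment
outcome = 2.0 + 0.5 * treated + rng.normal(0, 1, n)  # assumed true effect of 0.5

# Simple comparison of means between treatment and control group
diff_means = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# OLS regression of the outcome on a constant and the treatment dummy
X = np.column_stack([np.ones(n), treated])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"difference in means:                 {diff_means:.3f}")
print(f"regression coefficient on treatment: {beta[1]:.3f}")  # same value
</syntaxhighlight>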
  
<u>Three related problems that quantitative impact evaluation techniques attempt to address are the following</u>:
  
#the establishment of a counterfactual: what would have happened in the absence of the intervention(s);
#the elimination of [[Selection Bias|selection bias/effects]], leading to differences between the intervention group (or treatment group) and the control group (illustrated in the sketch after this list);
#a solution for the problem of unobservables: the omission of one or more unobserved variables, leading to biased estimates.
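
A small simulation can make the selection-bias and unobservables problems concrete. The sketch below is purely illustrative: the data, the unobserved "ability" variable and the assumed true effect of 1.0 are all made up. When people self-select into the intervention on the basis of the unobserved variable, the naive comparison of means is biased; under random assignment it recovers the true effect.

<syntaxhighlight lang="python">
# Illustration of selection bias caused by an unobserved variable.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(0, 1, n)   # unobserved variable that also affects the outcome
true_effect = 1.0               # assumed effect of the intervention

# Case 1: self-selection — people with higher (unobserved) ability are more
# likely to take up the intervention, so the groups differ from the start.
takes_up = (ability + rng.normal(0, 1, n)) > 0
y_selected = true_effect * takes_up + 2.0 * ability + rng.normal(0, 1, n)
naive = y_selected[takes_up].mean() - y_selected[~takes_up].mean()

# Case 2: random assignment — assignment is independent of ability.
assigned = rng.integers(0, 2, n).astype(bool)
y_random = true_effect * assigned + 2.0 * ability + rng.normal(0, 1, n)
randomised = y_random[assigned].mean() - y_random[~assigned].mean()

print(f"true effect:                      {true_effect:.2f}")
print(f"naive estimate (self-selection):  {naive:.2f}")       # biased well above 1.0
print(f"estimate under random assignment: {randomised:.2f}")  # close to 1.0
</syntaxhighlight>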
  
► [[Materials for Quantitative Methods|Materials for Quantitative Methods]]
  
► [[Quantitative_Methods|Quantitative Methods]]
  
= Qualitative Impact Evaluation =

Qualitative methods for data collection play an important role in impact evaluation by providing information useful to understand the processes behind observed results and assess changes in people’s perceptions of their well-being.<ref name="The World Bank: http://bit.ly/1lzjiaE">The World Bank: http://bit.ly/1lzjiaE</ref> Survey data collection, [[Monitoring Glossary|semi-structured interviews]], and [[Monitoring Glossary|focus-group interviews]] are a few of the specific methods that are found throughout the landscape of methodological approaches to impact evaluation. Qualitative techniques cannot quantify the changes attributable to interventions but should be used to evaluate important issues for which quantification is not feasible or practical, and to develop complementary and in-depth perspectives on processes of change induced by interventions.

= Randomised Control Trials (RCT) =

'''Randomised Control Trials (RCT)''' originate from evidence-based medicine, where they have been in common use since the 1950s. Only recently have RCTs been applied in other disciplines. For example, the French economist Esther Duflo has conducted RCTs for over 15 years to identify methods to alleviate poverty. This has produced some striking findings, e.g. de-worming as a way to bring children in the developing world to school.

An RCT works by comparing groups whose participants have been selected and allocated at random. Because allocation is random, the groups are statistically comparable, which eliminates the risk of allocation bias. Each group then receives a different intervention, which is the only attribute that differs between the groups. For example, in a study by Esther Duflo, one group received free mosquito nets while a second group had to purchase them. Since the groups are otherwise the same, the only difference is that one group pays and the other does not; the difference in net usage between the groups is then assessed.
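
A minimal sketch of this logic, with made-up participant IDs and usage rates (not the figures from the Duflo study): participants are allocated to the two arms at random, and the usage rates are then compared.

<syntaxhighlight lang="python">
# Randomly allocate hypothetical participants to two arms and compare usage.
import numpy as np

rng = np.random.default_rng(42)
participants = np.arange(500)             # hypothetical participant IDs
shuffled = rng.permutation(participants)  # random allocation removes allocation bias
free_net, purchased_net = shuffled[:250], shuffled[250:]

# Assumed usage probabilities, purely for illustration (not Duflo's results)
uses_net_free = rng.random(free_net.size) < 0.70
uses_net_purchased = rng.random(purchased_net.size) < 0.55

print(f"usage, free nets:      {uses_net_free.mean():.1%}")
print(f"usage, purchased nets: {uses_net_purchased.mean():.1%}")
print(f"estimated difference:  {uses_net_free.mean() - uses_net_purchased.mean():.1%}")
</syntaxhighlight>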

== The Process of Selecting a Sample Group ==
  
 
In randomised sampling, the participants in a study are simply identified by chance. This is particularly important if the aim is to minimise the impact on a large group of people surveyed. Randomised sampling, in comparison to other methods, does not focus on specific groups, e.g. gender, age or ethnic background.
  
 
As for most sampling methods, it is often sufficient to survey only a fraction of a population. The results are then analysed and a conclusion is drawn about the entire population of concern.

#Determine the number of people in the entire group.
#Determine the desired “accuracy” of the results. This sets the statistical margin of error and varies with the sample size: the smaller the population, the larger the share of it that must be sampled, and vice versa (see the sketch after this list).
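
A common way to translate the desired accuracy into a sample size is the standard formula for estimating a proportion, combined with a finite population correction. The sketch below uses illustrative population sizes, a 5% margin of error and roughly 95% confidence; it shows that smaller populations require sampling a larger share.

<syntaxhighlight lang="python">
# Sample size for a given margin of error, with finite population correction.
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size for a proportion at ~95% confidence (worst case p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # size for an 'infinite' population
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

for population in (1_000, 10_000, 100_000):
    n = sample_size(population)
    print(f"population {population:>7,}: sample {n:>4} ({n / population:.1%} of the population)")
</syntaxhighlight>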
  
Surveying only a fraction of the population can, however, lead to errors in the conclusion. For example, to identify the colour of swans, 100 swans are examined. All 100 swans are white, and hence the conclusion is drawn that all swans must be white. However, it is possible that swan number 101 is black. This is referred to as the problem of induction.
  

== Methods of Randomised Selection of Participants ==
  
 
*Flip a coin
*Roll dice
*Use a random number table
*Random digit dialling, using a computer system
*Lottery, e.g. RWI study Senegal (a software equivalent is sketched below)
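
In software, all of these boil down to drawing from a list of population elements with a random number generator. A minimal lottery-style sketch, assuming a hypothetical listing of households:

<syntaxhighlight lang="python">
# Software equivalent of a lottery draw: a simple random sample,
# drawn without replacement from a listing of the population.
import random

population = [f"household_{i:04d}" for i in range(1, 2001)]  # hypothetical sampling frame
random.seed(7)                                               # makes the draw reproducible
sample = random.sample(population, k=200)                    # 200 randomly selected participants

print(sample[:5])
</syntaxhighlight>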
  
 
== Advantages ==

*Provides a mix of the population
*Gives a better sample of the entire population
*Requires little information about the population
*Every person and every combination of groups has an equal chance of being included
*Easiest method of sampling
  
 
== Disadvantages ==

*No specific kind of group is selected
*The group cannot be controlled
*Large degree of randomness
*Possibly larger sampling error (more variation) from sample to sample
*Need to have a listing of population elements in some form
*Cost-intensive research design and implementation (high number of interviews, big sample size)
*"Give-aways" raise expectations that might not be fulfilled in future and do not necessarily reflect reality
  
 
== Conclusion ==

Randomised sampling provides a better overview of the general situation in a specific target group and is more representative of the market. For example, when products are sold they are not directly targeted at a specific end-user group but are simply sold in a shop where they are accessible to everyone; products are bought depending on the needs and wants of the end-user. Whilst randomised sampling does not focus on the decision maker of particular interest, it does not limit the results to a specific group, and hence there is scope for a more accurate result and potentially for revealing new findings. The only difference between the groups is a single variable.

= Other Approaches =

Nowadays, participatory methods have become ‘mainstream’ tools in development in almost every area of policy intervention. Participatory evaluation approaches are built on the principle that stakeholders should be involved in some or all stages of the evaluation. In the case of impact evaluation, this includes aspects such as the determination of objectives and of the indicators to be taken into account, as well as stakeholder participation in data collection and analysis.
  
<u>Methodologies commonly included under this umbrella include:</u>
*[http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTSOCIALDEVELOPMENT/EXTPCENG/0,,contentMDK:20509352~menuPK:1278203~pagePK:148956~piPK:216618~theSitePK:410306,00.html Participatory Impact Monitoring]
*the Participatory Learning and Action (PLA) family, including Rapid Rural Appraisal (RRA) and [http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTSOCIALDEVELOPMENT/EXTPCENG/0,,contentMDK:20507691~pagePK:148956~piPK:216618~theSitePK:410306,00.html Participatory Rural Appraisal (PRA)]
*Participatory Poverty Assessment (PPA)
*Policy and Social Impact Analysis (PSIA)
*Social Assessment (SA)
  
= Further Information =

*[[Catalogue of Methods for Impact Studies]]
*[[Control Groups]]
*[[Quasi-Experimental or Non-Experimental Designs]]
*[[Experimental Design]]
*[[Portal:Impacts|Impact Portal on energypedia]]
  
= References =

*Impact Evaluations and Development: NONIE Guidance on Impact Evaluation (2009). URL: [http://www.worldbank.org/ieg/nonie/guidance.html www.worldbank.org], accessed 02/11/2009.
*Leeuw, Frans & Vaessen, Jos (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft Version for Discussion at the Cairo Conference, March-April 2009. NONIE – Network on Impact Evaluation, pp. 48-50.
*Mikkelsen, B. (2005): Methods for Development Work and Research. Sage Publications, Thousand Oaks, p. 96.
*Esther Duflo (2010): Social Experiments to Fight Poverty. Video on RCT. URL: [http://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty.html www.ted.com], accessed 14/06/2012.
*Custominsight: Random Samples and Statistical Accuracy. URL: [http://www.custominsight.com/articles/random-sampling.asp www.custominsight.com], accessed 14/06/2012.
*Pine Forge Press (2004): Sampling: The World of Probability and Nonprobability Sampling. URL (direct download, ppt): [http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCEQFjAA&url=http://homepages.wmich.edu/~wienir/powerpointslides/4%20Chap5_Sampling.ppt&ei=QKAyVOeNE4SjPKO3gdgE&usg=AFQjCNGTSN1kalQXYHKdhGu70oXtlKG7ng&bvm=bv.76802529,d.ZWU Pine Forge Press, Sampling], accessed 14/06/2012.
*Frerichs, R.R. (2008): Rapid Surveys (unpublished): Simple Random Sampling. URL: [http://www.ph.ucla.edu/epi/rapidsurveys/RScourse/RSbook_ch3.pdf www.ph.ucla.edu], accessed 14/06/2012.
*Kerry, R. & Timmons, S. (2012): Philosophy of Randomised Controlled Trials - Part 1. URL: [http://www.youtube.com/watch?v=v8sLQk_KKFI www.youtube.com], accessed 14/06/2012.
  
<references />
 
  
 
[[Category:Impacts]]
 
