Impact Evaluation - Mixed Methods

From energypedia
= Overview =
  
= Methodological Triangulation<br/> =
  
'''Triangulation''' is a key concept that embodies much of the rationale behind doing '''mixed-methods research''' and represents a set of principles to fortify the design, analysis and interpretation of findings in impact evaluation. Triangulation is about looking at things from multiple points of view, a method “to overcome the problems that stem from studies relying upon a single theory, a single method, a single set of data […] and from a single investigator” (Mikkelsen).
<u>There are different types of triangulation:</u>

*Data triangulation — To study a problem using different types of data, different points in time, or different units of analysis;
*Investigator triangulation — Multiple researchers looking at the same problem;
*Discipline triangulation — Researchers trained in different disciplines looking at the same problem;
*Theory triangulation — Using multiple competing theories to explain and analyze a problem;
*Methodological triangulation — Using different methods, or the same method over time, to study a problem.
  
<br/>
= Mixed Methods<br/> =
<u>Advantages of mixed-methods approaches to impact evaluation are the following:</u>

*A mix of methods can be used to assess important outcomes or impacts of the intervention being studied. If the results from different methods converge, then inferences about the nature and magnitude of these impacts will be stronger.
*A mix of methods can be used to assess different facets of complex outcomes or impacts, yielding a broader, richer portrait than one method alone can. Quantitative impact evaluation techniques work well for a limited set of pre-established variables (preferably determined and measured ex ante) but less well for capturing unintended, less expected (indirect) effects of interventions. Qualitative methods or descriptive (secondary) data analysis can be helpful in better understanding the latter.
*One set of methods could be used to assess outcomes or impacts and another set to assess the quality and character of program implementation, including program integrity and the experiences during the implementation phase.
*Multiple methods can help ensure that the sampling frame and the sample selection strategies cover the whole of the target intervention and comparison populations.
  
<br/>
= Quantitative Impact Evaluation<br/> =
  
Quantitative impact evaluations have a comparative advantage in addressing the issue of attribution. Evaluations can either be [[Monitoring Glossary|experimental]] (or randomised controlled designs), when the evaluator purposely collects data and designs the evaluation in advance, or [[Monitoring Glossary|quasi-experimental]], when data are collected to mimic an experimental situation. [[Monitoring Glossary|Multiple regression analysis]] is the all-purpose technique that can be used in virtually all settings; when the experiment is organised in such a way that no controls are needed, a simple comparison of means can be used instead of a regression, since it will give the same answer.
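The claimed equivalence — with random assignment and no controls, a regression on a treatment dummy gives the same answer as a simple comparison of means — can be checked numerically. A minimal sketch with made-up outcome numbers (illustrative data, not from the source):

```python
def ols_slope(x, y):
    """Slope of an ordinary least squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Illustrative outcomes for a randomly assigned treatment and control group.
treated = [4.2, 5.1, 4.8, 5.5, 4.9]
control = [3.1, 3.8, 3.4, 3.6, 3.5]

diff_in_means = sum(treated) / len(treated) - sum(control) / len(control)
# Regress the pooled outcomes on a 1/0 treatment indicator.
slope = ols_slope([1] * len(treated) + [0] * len(control), treated + control)

print(round(diff_in_means, 3), round(slope, 3))  # identical: 1.42 1.42
```

With a binary regressor and no other controls, the OLS slope is algebraically the difference between the two group means, which is why the two numbers coincide.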
<u>Three related problems that quantitative impact evaluation techniques attempt to address are the following</u>:
  
#the establishment of a counterfactual: What would have happened in the absence of the intervention(s);
#the elimination of [[Selection Bias|selection bias/effects]], leading to differences between the intervention group (or treatment group) and the control group;
#a solution for the problem of unobservables: the omission of one or more unobserved variables, leading to biased estimates.
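These three problems can be made concrete with a small simulation (all numbers and variable names are illustrative assumptions, not from the source): when an unobserved trait drives both programme take-up and the outcome, a naive comparison of participants and non-participants is biased, while random assignment recovers the true effect.

```python
import random

rng = random.Random(0)   # fixed seed: illustrative, reproducible run
TRUE_EFFECT = 1.0        # the programme's real impact on the outcome

# Unobserved "motivation" raises the outcome AND the chance of enrolling.
motivation = [rng.gauss(0, 1) for _ in range(10000)]

def outcome(m, treated):
    return 2.0 + m + TRUE_EFFECT * treated + rng.gauss(0, 0.1)

# Self-selection: high-motivation households enrol (selection bias).
self_treated = [outcome(m, 1) for m in motivation if m > 0]
self_control = [outcome(m, 0) for m in motivation if m <= 0]

# Random assignment: enrolment is independent of motivation.
rand_treated, rand_control = [], []
for m in motivation:
    if rng.random() < 0.5:
        rand_treated.append(outcome(m, 1))
    else:
        rand_control.append(outcome(m, 0))

def mean(xs):
    return sum(xs) / len(xs)

print(mean(self_treated) - mean(self_control))  # ~2.6: effect plus selection bias
print(mean(rand_treated) - mean(rand_control))  # ~1.0: randomisation removes the bias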
  
► [[Materials for Quantitative Methods|Materials for Quantitative Methods]]

[[Quantitative_Methods|Quantitative Methods]]

= Qualitative Impact Evaluation<br/> =

Qualitative methods for data collection play an important role in impact evaluation by providing information useful to understand the processes behind observed results and to assess changes in people’s perceptions of their well-being.<ref name="The World Bank: http://bit.ly/1lzjiaE">The World Bank: http://bit.ly/1lzjiaE</ref> Survey data collection, [[Monitoring Glossary|semi-structured interviews]], and [[Monitoring Glossary|focus-group interviews]] are a few of the specific methods found throughout the landscape of methodological approaches to impact evaluation. Qualitative techniques cannot quantify the changes attributable to interventions but should be used to evaluate important issues for which quantification is not feasible or practical, and to develop complementary and in-depth perspectives on processes of change induced by interventions.

= Randomised Control Trials (RCT)<br/> =

'''Randomised Control Trials (RCTs)''' originate from evidence-based medicine, where they have been in common use since the 1950s. Only recently have RCTs been adopted in other disciplines. For example, Esther Duflo, a French economist, has conducted RCTs for over 15 years to identify methods to alleviate poverty. This has produced some striking findings, e.g. de-worming as a method to bring children in the developing world to school.

<br/>

An RCT works by comparing groups of randomly selected participants. Because participants are selected and allocated randomly, the groups are statistically equivalent, which eliminates the risk of allocation bias. Each group is then given a different intervention, which is the only attribute that differs between the groups. For example, in a study by Esther Duflo, the first group received free mosquito nets while the second group had to purchase them. Since the groups are otherwise the same, the only difference is that one group pays and the other does not; the difference in net usage is then assessed.

<br/>

== The Process of Selecting a Sample Group<br/> ==

In randomised sampling the participants in a study are identified purely by chance. This is particularly important if the aim is to minimise bias when surveying a large group of people. Randomised sampling, in comparison to other methods, does not focus on specific groups, e.g. gender, age or ethnic background.

<br/>

As with most sampling methods, it is often sufficient to survey only a fraction of a population. The results are then analysed and a conclusion is drawn about the entire population of concern.

#Determine the number of people in the entire group;
#Determine the desired “accuracy” of the results. This sets the statistical margin of error and varies with sample size: the smaller the population, the larger the share of it that must be sampled, and vice versa.
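The two steps can be sketched with a standard sample-size calculation — Cochran's formula for a proportion, plus a finite population correction. A minimal sketch (the function name and the 95%-confidence defaults are assumptions for illustration, not from the source):

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Approximate sample size for estimating a proportion, using
    Cochran's formula with a finite population correction.
    Defaults assume a 95% confidence level (z = 1.96) and the
    worst-case variance (p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n)

# A smaller population requires a larger *share* of itself to be sampled:
print(sample_size(500))     # 218 of 500     (about 44%)
print(sample_size(100000))  # 383 of 100000  (about 0.4%)
```

This also shows why the needed sample size saturates: beyond a few hundred thousand people, the population size barely changes the result.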

However, sampling can lead to errors in the conclusion. For example, to identify the colour of swans, 100 swans are examined. All 100 swans are white, and hence the conclusion is drawn that all swans must be white. However, it is possible that swan number 101 is black. This is referred to as the problem of induction.

<br/>

== Methods of Randomised Selection of Participants<br/> ==

*Flip a coin
*Roll dice
*Use a random number table
*Random digit dialling, using a computer system
*Lottery, e.g. the RWI study in Senegal
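In software, each of these devices reduces to drawing from a pseudo-random generator. A minimal sketch of random allocation into treatment and control groups (all names and the group sizes are illustrative assumptions):

```python
import random

def randomise(participants, seed=None):
    """Shuffle the participants and split them into two (near-)equal
    groups — the software equivalent of a per-participant coin flip
    or lottery."""
    rng = random.Random(seed)  # a seed makes the allocation reproducible/auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Hypothetical household IDs for illustration.
households = [f"HH-{i:03d}" for i in range(1, 21)]
treatment, control = randomise(households, seed=42)
print(len(treatment), len(control))  # 10 10
```

Recording the seed alongside the participant list lets the allocation be verified later, which matters when the assignment itself is part of the evaluation design.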

<br/>

== Advantages ==

*Provides a mix of the population
*Gives a better sample of the entire population
*Requires little information about the population
*Every person and every combination of groups has an equal chance of being included
*Easiest method of sampling

<br/>

== Disadvantages ==

*No specific kind of group is selected
*The group cannot be controlled
*Large degree of randomness
*Possibly larger sampling error (more variation) from sample to sample
*Need to have a listing of population elements in some form
*Cost-intensive research design and implementation (high number of interviews, big sample size)
*"Give-aways" raise expectations, which might not be fulfilled in future, and do not necessarily reflect reality

<br/>

== Conclusion ==

Randomised sampling provides a better overview of the general situation in a specific target group and is more representative of the market. For example, when products are sold they are not directly targeted at a specific end-user group but are simply sold in a shop where they are accessible to everyone; products are bought depending on the needs and wants of the end-user. While randomised sampling does not focus on the decision maker of particular interest, it does not limit the results to a specific group, so there is scope for a more accurate result and potentially for revealing new findings. The only difference between the control groups is a single variable.

<br/>

= Other Approaches<br/> =

Nowadays, participatory methods have become ‘mainstream’ tools in development in almost every area of policy intervention. Participatory evaluation approaches are built on the principle that stakeholders should be involved in some or all stages of the evaluation. In the case of impact evaluation this includes aspects such as the determination of objectives and the indicators to be taken into account, as well as stakeholder participation in data collection and analysis.

<u>Methodologies commonly included under this umbrella include:</u>

*[http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTSOCIALDEVELOPMENT/EXTPCENG/0,,contentMDK:20509352~menuPK:1278203~pagePK:148956~piPK:216618~theSitePK:410306,00.html Participatory Impact Monitoring]
*the Participatory Learning and Action (PLA) family, including Rapid Rural Appraisal (RRA)
*[http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTSOCIALDEVELOPMENT/EXTPCENG/0,,contentMDK:20507691~pagePK:148956~piPK:216618~theSitePK:410306,00.html Participatory Rural Appraisal (PRA)]
*Participatory Poverty Assessment (PPA)
*Policy and Social Impact Analysis (PSIA)
*Social Assessment (SA)

<br/>

= Further Information<br/> =

*[[Catalogue of Methods for Impact Studies|Catalogue of Methods for Impact Studies]]
*[[Control Groups|Control Groups]]
*[[Quasi-Experimental or Non-Experimental Designs|Quasi-Experimental or Non-Experimental Designs]]
*[[Experimental Design|Experimental Design]]
*[[Portal:Impacts|Impact Portal on energypedia]]

<br/>

= References<br/> =

*Impact Evaluations and Development: NONIE Guidance on Impact Evaluation (2009). URL: [http://www.worldbank.org/ieg/nonie/guidance.html www.worldbank.org] 02/11/2009.
*Leeuw, Frans & Vaessen, Jos (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft Version for Discussion at the Cairo Conference, March–April 2009. NONIE – Network on Impact Evaluation, pp. 48–50.
*Mikkelsen, B. (2005): Methods for Development Work and Research. Sage Publications, Thousand Oaks, p. 96.
*Esther Duflo (2010): Social Experiments to Fight Poverty. Video on RCTs. URL: [http://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty.html www.ted.com] 14/06/2012.
*Custominsight: Random Samples and Statistical Accuracy. URL: [http://www.custominsight.com/articles/random-sampling.asp www.custominsight.com] 14/06/2012.
*Pine Forge Press (2004): Sampling: The World of Probability and Nonprobability Sampling. URL (direct download, ppt): [http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCEQFjAA&url=http://homepages.wmich.edu/~wienir/powerpointslides/4%20Chap5_Sampling.ppt&ei=QKAyVOeNE4SjPKO3gdgE&usg=AFQjCNGTSN1kalQXYHKdhGu70oXtlKG7ng&bvm=bv.76802529,d.ZWU Pine Forge Press, Sampling] 14/06/2012.
*Frerichs, R.R. (2008): Rapid Surveys (unpublished): Simple Random Sampling. URL: [http://www.ph.ucla.edu/epi/rapidsurveys/RScourse/RSbook_ch3.pdf www.ph.ucla.edu] 14/06/2012.
*Kerry R., Timmons S. (2012): Philosophy of Randomised Controlled Trials - Part 1. URL: [http://www.youtube.com/watch?v=v8sLQk_KKFI www.youtube.com] 14/06/2012.

<references />
[[Category:Impacts]]

Latest revision as of 11:44, 23 August 2021
