Impact Evaluation - Mixed Methods
Methodological Triangulation
Triangulation is a key concept that embodies much of the rationale behind doing mixed method research and represents a set of principles to fortify the design, analysis and interpretation of findings in Impact Evaluation. Triangulation is about looking at things from multiple points of view, a method “to overcome the problems that stem from studies relying upon a single theory, a single method, a single set of data […] and from a single investigator” (Mikkelsen).
There are different types of triangulation:
- Data triangulation—To study a problem using different types of data, different points in time, or different units of analysis
- Investigator triangulation—Multiple researchers looking at the same problem
- Discipline triangulation—Researchers trained in different disciplines looking at the same problem
- Theory triangulation—Using multiple competing theories to explain and analyze a problem
- Methodological triangulation—Using different methods, or the same method over time, to study a problem.
Mixed Methods
Advantages of mixed-methods approaches to impact evaluation are the following:
- A mix of methods can be used to assess important outcomes or impacts of the intervention being studied. If the results from different methods converge, then inferences about the nature and magnitude of these impacts will be stronger.
- A mix of methods can be used to assess different facets of complex outcomes or impacts, yielding a broader, richer portrait than one method alone can. Quantitative impact evaluation techniques work well for a limited set of pre-established variables (preferably determined and measured ex ante) but less well for capturing unintended, less expected (indirect) effects of interventions. Qualitative methods or descriptive (secondary) data analysis can be helpful in better understanding the latter.
- One set of methods could be used to assess outcomes or impacts and another set to assess the quality and character of program implementation, including program integrity and the experiences during the implementation phase.
- Multiple methods can help ensure that the sampling frame and the sample selection strategies cover the whole of the target intervention and comparison populations.
Quantitative impact evaluation
- Quantitative impact evaluations have a comparative advantage in addressing the issue of attribution. Evaluations can either be experimental (randomized controlled designs), where the evaluator purposely designs the evaluation and collects data in advance, or quasi-experimental, where data are collected so as to mimic an experimental situation. Multiple regression analysis is the all-purpose technique that can be used in virtually all settings; when the experiment is organized in such a way that no controls are needed, a simple comparison of means can be used instead of a regression, since it gives the same answer (see the sketch after this list). Three related problems that quantitative impact evaluation techniques attempt to address are the following:
- the establishment of a counterfactual: What would have happened in the absence of the intervention(s);
- the elimination of selection bias/effects, leading to differences between intervention group (or treatment group) and control group;
- a solution for the problem of unobservables: the omission of one or more unobserved variables, leading to biased estimates.
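A minimal sketch in Python of the equivalence noted above: under randomized assignment with no additional controls, regressing the outcome on a treatment dummy gives exactly the simple difference in group means. The data are simulated and purely illustrative (the sample size, the true impact of 2.0, and the noise level are assumptions, not values from the source).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated experiment: 500 units, treatment assigned at random,
# true impact of the intervention set to 2.0 (illustrative values only).
n = 500
treated = rng.integers(0, 2, size=n)                      # random assignment
outcome = 10 + 2.0 * treated + rng.normal(0, 3, size=n)   # observed outcome

# (1) Simple comparison of means: valid here because random assignment makes
#     the control group a credible counterfactual and removes selection bias.
diff_in_means = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# (2) Regression of the outcome on a constant and the treatment dummy.
X = np.column_stack([np.ones(n), treated])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"difference in means:        {diff_in_means:.3f}")
print(f"regression treatment coef.: {coef[1]:.3f}")        # same number

When assignment is not randomized, the two numbers still coincide, but neither can be read as the impact of the intervention: the selection-bias and unobservables problems listed above then have to be addressed through control variables, quasi-experimental designs, or other identification strategies.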
Qualitative approaches
- Survey data collection and (descriptive) analysis, semi-structured interviews, and focus-group interviews are but a few of the specific methods found throughout the landscape of methodological approaches to impact evaluation. Qualitative techniques cannot quantify the changes attributable to interventions, but they should be used to evaluate important issues for which quantification is not feasible or practical, and to develop complementary, in-depth perspectives on the processes of change induced by interventions.
Other approaches
- Participatory methods have become mainstream tools in development and are now used in almost every area of policy intervention. Participatory evaluation approaches are built on the principle that stakeholders should be involved in some or all stages of the evaluation. In the case of impact evaluation, this includes aspects such as the determination of objectives and of the indicators to be taken into account, as well as stakeholder participation in data collection and analysis.
- Methodologies commonly included under this umbrella include: Appreciative Inquiry (AI), Citizen Report Cards (CRCs), Community Score Cards (CSCs), Beneficiary Assessment (BA), Participatory Impact Monitoring, the Participatory Learning and Action (PLA) family including Rapid Rural Appraisal (RRA), Participatory Rural Appraisal (PRA), and Participatory Poverty Assessment (PPA), Policy and Social Impact Analysis (PSIA), Social Assessment (SA), Systematic Client Consultation (SCC), Self-esteem, Associative strength, Resourcefulness, Action planning and Responsibility (SARAR), and Objectives-Oriented Project Planning (ZOPP).
Sources:
Leeuw, Frans & Vaessen, Jos (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft version for discussion at the Cairo conference, March-April 2009. NONIE – Network of Networks on Impact Evaluation, pp. 48-50.
Impact Evaluations and Development: NONIE Guidance on Impact Evaluation (2009). URL: http://www.worldbank.org/ieg/nonie/guidance.html, accessed 02/11/2009.
Mikkelsen, B. (2005): Methods for Development Work and Research. Sage Publications, Thousand Oaks, p. 96.