Thursday, Oct 7, 10:00-12:00 CEST
Impact Evaluation - Mixed Methods
Triangulation is a key concept that embodies much of the rationale behind doing mixed-methods research and represents a set of principles to fortify the design, analysis and interpretation of findings in "Impact Evaluation". Triangulation is about looking at things from multiple points of view, a method “to overcome the problems that stem from studies relying upon a single theory, a single method, a single set of data […] and from a single investigator” (Mikkelsen).
There are different types of triangulation:
- Data triangulation — To study a problem using different types of data, different points in time, or different units of analysis;
- Investigator triangulation — Multiple researchers looking at the same problem;
- Discipline triangulation — Researchers trained in different disciplines looking at the same problem;
- Theory triangulation — Using multiple competing theories to explain and analyze a problem;
- Methodological triangulation — Using different methods, or the same method over time, to study a problem.
Advantages of mixed-methods approaches to impact evaluation are the following:
- A mix of methods can be used to assess important outcomes or impacts of the intervention being studied. If the results from different methods converge, then inferences about the nature and magnitude of these impacts will be stronger.
- A mix of methods can be used to assess different facets of complex outcomes or impacts, yielding a broader, richer portrait than one method alone can. Quantitative impact evaluation techniques work well for a limited set of pre-established variables (preferably determined and measured ex ante) but less well for capturing unintended, less expected (indirect) effects of interventions. Qualitative methods or descriptive (secondary) data analysis can be helpful in better understanding the latter.
- One set of methods could be used to assess outcomes or impacts and another set to assess the quality and character of program implementation, including program integrity and the experiences during the implementation phase.
- Multiple methods can help ensure that the sampling frame and the sample selection strategies cover the whole of the target intervention and comparison populations.
Quantitative Impact Evaluation
Quantitative impact evaluations have a comparative advantage in addressing the issue of attribution. Evaluations can either be experimental (randomised controlled designs), where the evaluator purposely collects data and designs the evaluation in advance, or quasi-experimental, where data are collected so as to mimic an experimental situation. Multiple regression analysis is an all-purpose technique that can be used in virtually all settings; when the experiment is organised in such a way that no controls are needed, a simple comparison of means can be used instead of a regression, since it gives the same answer.
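The equivalence of the two estimators can be sketched with simulated data; all numbers and variable names below are invented for illustration:

```python
# Sketch: with random assignment, regressing the outcome on a 0/1 treatment
# dummy reproduces the simple difference in group means.
import random

random.seed(42)

# Simulate a randomised intervention: treatment adds about 2 units to the outcome.
n = 1000
treated = [random.random() < 0.5 for _ in range(n)]
outcome = [10 + (2 if t else 0) + random.gauss(0, 1) for t in treated]

# Difference in means between treatment and control groups.
mean_t = sum(y for y, t in zip(outcome, treated) if t) / sum(treated)
mean_c = sum(y for y, t in zip(outcome, treated) if not t) / (n - sum(treated))
diff_means = mean_t - mean_c

# OLS slope of outcome on the treatment dummy: cov(x, y) / var(x).
x = [1.0 if t else 0.0 for t in treated]
mx, my = sum(x) / n, sum(outcome) / n
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, outcome)) / \
       sum((xi - mx) ** 2 for xi in x)

# The two estimates coincide (up to floating-point rounding).
print(f"difference in means = {diff_means:.3f}, OLS slope = {beta:.3f}")
```

With a binary regressor the OLS slope is algebraically identical to the difference in group means, which is why the simple comparison suffices when no further controls are needed.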
Three related problems that quantitative impact evaluation techniques attempt to address are the following:
- the establishment of a counterfactual: What would have happened in the absence of the intervention(s);
- the elimination of selection bias/effects, leading to differences between intervention group (or treatment group) and control group;
- a solution for the problem of unobservables: the omission of one or more unobserved variables, leading to biased estimates.
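The second and third problems can be illustrated with a small simulation (all numbers invented): when an unobserved variable drives both participation and the outcome, a naive comparison of group means confuses selection effects with impact.

```python
# Sketch: self-selection into a programme, driven by unobserved "motivation",
# biases the naive treatment-control comparison upwards.
import random

random.seed(1)

n = 5000
true_effect = 1.0

# Unobserved motivation affects both participation and the outcome.
motivation = [random.gauss(0, 1) for _ in range(n)]
takes_part = [m > 0 for m in motivation]          # self-selection, not randomisation
outcome = [5 + 2 * m + (true_effect if t else 0) + random.gauss(0, 1)
           for m, t in zip(motivation, takes_part)]

mean_t = sum(y for y, t in zip(outcome, takes_part) if t) / sum(takes_part)
mean_c = sum(y for y, t in zip(outcome, takes_part) if not t) / (n - sum(takes_part))

# The naive estimate mixes the true effect with the selection effect.
naive = mean_t - mean_c
print(f"true effect = {true_effect}, naive estimate = {naive:.2f}")
```

Because participants are on average more motivated than non-participants, the naive estimate substantially overstates the true effect; randomisation removes exactly this kind of bias.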
Qualitative Impact Evaluation
Qualitative methods for data collection play an important role in impact evaluation by providing information useful to understand the processes behind observed results and assess changes in people’s perceptions of their well-being. Survey data collection, semi-structured interviews, and focus-group interviews are a few of the specific methods that are found throughout the landscape of methodological approaches to impact evaluation. Qualitative techniques cannot quantify the changes attributable to interventions but should be used to evaluate important issues for which quantification is not feasible or practical, and to develop complementary and in-depth perspectives on processes of change induced by interventions.
Randomised Controlled Trials (RCT)
Randomised controlled trials (RCTs) originate in evidence-based medicine, where they have been in common use since the 1950s. Only recently have RCTs been adopted in other disciplines. For example, Esther Duflo, a French economist, has conducted RCTs for over 15 years to identify methods of alleviating poverty. This has produced some striking findings, e.g. de-worming as a method of bringing children in the developing world to school.
An RCT works by randomly selecting participants and allocating them to groups. Because participants are selected and allocated at random, the groups are statistically equivalent, which eliminates the risk of allocation bias. Each group then receives a different intervention, which is the only attribute that differs between the groups. For example, in the study by Esther Duflo, the first group was given free mosquito nets while the second group purchased them. Since the groups are otherwise the same, the only difference is that one group pays and the other does not; the difference in usage is then assessed.
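A minimal sketch of such a two-arm design, using invented numbers rather than Duflo's actual data:

```python
# Sketch: randomly allocate households to a "free net" or "purchased net" arm,
# then compare usage rates between the two arms.
import random

random.seed(0)

households = list(range(200))
random.shuffle(households)            # random allocation removes allocation bias
free_arm = set(households[:100])      # arm 1: receives nets for free
paid_arm = set(households[100:])      # arm 2: buys nets at a small price

# Hypothetical usage probabilities, for the simulation only.
def uses_net(hh):
    p = 0.60 if hh in free_arm else 0.55
    return random.random() < p

usage_free = sum(uses_net(hh) for hh in free_arm) / len(free_arm)
usage_paid = sum(uses_net(hh) for hh in paid_arm) / len(paid_arm)

# Since the arms differ only in the price of the net, the gap in usage
# rates estimates the effect of charging for nets.
print(f"usage (free): {usage_free:.2f}, usage (paid): {usage_paid:.2f}")
```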
The Process of Selecting a Sample Group
In randomised sampling, the participants in a study are identified purely by chance. This is particularly important when a large group of people is to be surveyed. Randomised sampling, in contrast to other methods, does not focus on specific groups, e.g. genders, ages or ethnic backgrounds.
As with most sampling methods, it is often sufficient to survey only a fraction of a population. The results are then analysed and a conclusion is drawn about the entire population of concern. The basic steps are:
- Determine the number of people in the entire group;
- Determine the desired “accuracy” of the results. This sets the statistical margin of error and, with it, the required sample size: for a given margin of error, the smaller the population, the larger the share of it that must be sampled, and vice versa.
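The two steps above can be sketched with the standard sample-size formula for a proportion at 95% confidence, including a finite population correction (the function name and example populations are invented for illustration):

```python
# Sketch: required simple-random-sample size for a given margin of error,
# with a finite population correction.
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    # Infinite-population sample size for a proportion: n0 = z^2 p(1-p) / e^2.
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction shrinks n for small populations.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# The required *share* of the population grows as the population shrinks.
for pop in (1_000_000, 10_000, 500):
    n = sample_size(pop, 0.05)
    print(f"population {pop}: sample {n} ({n / pop:.1%})")
```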
However, sampling can lead to errors in the conclusion. For example, to identify the colour of swans, 100 swans are examined. All 100 swans are white, and hence the conclusion is drawn that all swans must be white. However, it is possible that swan number 101 is black. This is referred to as the problem of induction.
Methods of Randomised Selection of Participants
- Flip a coin
- Roll dice
- Use a random number table
- Random digit dialling, using a computer system
- Lottery, e.g. RWI study Senegal
Advantages:
- Provides a mix of the population
- Gives a better sample of the entire population
- Requires little information about the population
- Every person and every combination of groups has an equal chance of being included
- Easiest method of sampling
- No specific kind of group is selected
Disadvantages:
- The group cannot be controlled
- Large degree of randomness
- Possibly larger sampling error [more variation] from sample to sample
- Requires a listing of population elements in some form
- Cost-intensive research design and implementation (high number of interviews, large sample size)
- In terms of "give-aways", raises expectations which might not be fulfilled in future, and does not necessarily reflect reality
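The computerised analogue of the coin-flip or lottery methods above can be sketched as follows (the population names are invented):

```python
# Sketch: simple random sampling from a listed population.
import random

random.seed(7)

# Simple random sampling requires a listing of population elements,
# one of the drawbacks noted above.
population = [f"household_{i:03d}" for i in range(250)]

# random.sample gives every element, and every possible group of 30,
# an equal chance of being selected, without replacement.
sample = random.sample(population, k=30)

print(len(sample), len(set(sample)))  # 30 distinct elements, no repeats
```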
Randomised sampling provides a better overview of the general situation in a specific target group and is more representative of the market. For example, when products are sold, they are not directly targeted at a specific end-user group but are simply sold in a shop where they are accessible to everyone; products are bought depending on the needs and wants of the end-user. While randomised sampling does not focus on the decision-maker of interest, it does not limit the results to a specific group, so there is scope for a more accurate result and potentially for revealing new findings. The only difference between the control groups is a single variable.
Participatory Approaches
Nowadays, participatory methods have become ‘mainstream’ tools in almost every area of development policy intervention. Participatory evaluation approaches are built on the principle that stakeholders should be involved in some or all stages of the evaluation. In the case of impact evaluation, this includes aspects such as the determination of objectives and of the indicators to be taken into account, as well as stakeholder participation in data collection and analysis.
Methodologies commonly included under this umbrella include:
- Participatory Impact Monitoring
- the Participatory Learning and Action (PLA) family
- including Rapid Rural Appraisal (RRA),
- Participatory Rural Appraisal (PRA),
- Participatory Poverty Assessment (PPA),
- Policy and Social Impact Analysis (PSIA),
- Social Assessment (SA).
- Catalogue of Methods for Impact Studies
- Control Groups
- Quasi-Experimental or Non-Experimental Designs
- Experimental Design
- Impact Portal on energypedia
References
- Leeuw, Frans & Vaessen, Jos (2009): Impact Evaluations and Development. NONIE Guidance on Impact Evaluation. Draft version for discussion at the Cairo conference, March-April 2009. NONIE – Network on Impact Evaluation, pp. 48-50. URL: www.worldbank.org (accessed 02/11/2009).
- Mikkelsen, B. (2005): Methods for Development Work and Research. Sage Publications, Thousand Oaks, p. 96.
- Duflo, Esther (2010): Social Experiments to Fight Poverty. Video on RCT. URL: www.ted.com (accessed 14/06/2012).
- Custominsight: Random Samples and Statistical Accuracy. URL: www.custominsight.com (accessed 14/06/2012).
- Pine Forge Press (2004): Sampling: The World of Probability and Nonprobability Sampling. URL (direct download, ppt.): Pine Forge Press, Sampling (accessed 14/06/2012).
- Frerichs, R.R. (2008): Rapid Surveys (unpublished): Simple Random Sampling. URL: www.ph.ucla.edu (accessed 14/06/2012).
- Kerry, R. & Timmons, S. (2012): Philosophy of Randomised Controlled Trials - Part 1. URL: www.youtube.com (accessed 14/06/2012).
- The World Bank: http://bit.ly/1lzjiaE