The term control group is used when the evaluation employs an experimental design, while the term comparison group is associated with a quasi-experimental design. In a non-experimental design, program participants are compared to non-participants by controlling statistically for differences between the two groups. These three evaluation designs vary in feasibility, cost, the clarity and validity of their results, and the degree of selection bias.
Control groups are included in evaluation designs in order to estimate the impact of a development measure.
The main challenge in determining impacts is that it is impossible to observe how people undergoing an intervention would have acted if the intervention had not occurred. Only a well-designed form of this so-called counterfactual allows for robust statements about the impact that is actually attributable to the intervention.
In general, evaluation is concerned with measuring the effects of an intervention (or ‘treatment’) such as a training course or electrification. Determining the true effect of electrification on, for example, income would require comparing the outcome variable income after the household has received the treatment to the same household’s income without it. The fundamental problem is that households can never be observed in the situation of not having received electrification if they are in fact electrified.
To solve this, the unobservable situation of not being electrified has to be replaced in the evaluation design by a group of non-electrified households: the control group. The control group is hence included in order to simulate how the outcome variable would have developed if the intervention had not taken place.
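This fundamental problem can be illustrated with a minimal simulation. All numbers here are hypothetical: each simulated household has two potential incomes, one with and one without electrification, but in reality only one of the two is ever observed.

```python
import random

random.seed(1)

# Hypothetical potential outcomes (simulated data, not from any real survey):
# y0 = income without electrification, y1 = income with electrification.
households = [{"y0": random.gauss(100, 10)} for _ in range(1000)]
for h in households:
    h["y1"] = h["y0"] + 20  # assumed true treatment effect of +20

# The true average effect uses BOTH potential outcomes of every household ...
true_effect = sum(h["y1"] - h["y0"] for h in households) / len(households)

# ... but in reality only one outcome per household is observable, which is
# why a control group must stand in for the missing counterfactual outcome.
print(round(true_effect, 1))  # 20.0
```

The simulation can compute the true effect only because it knows both potential outcomes; an evaluator never does, which is exactly the gap the control group is meant to fill.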
In order to fulfill this task effectively, the control group should be composed of individuals who
- have not undergone the intervention, but
- live under similar general conditions (e.g. access to road infrastructure) and
- exhibit – on average – similar socio-economic characteristics (e.g. educational background).
Often the non-connected households in the target region of an electrification project are taken as the control group. This group, though, is generally not able to simulate the connected households’ behavior in the hypothetical non-electrified scenario. The reason is that the two groups tend to differ in education, income, motivation and other household characteristics; for example, richer households are more inclined to get connected. Using the non-connected households as a control group in an electrification evaluation is therefore deficient in the sense that the estimated effect of the intervention is biased by these differences.
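The resulting selection bias can be made visible in a small simulation. The numbers are invented for illustration: richer households are assumed more likely to connect, so a naive comparison of connected and non-connected households overstates the true effect.

```python
import random

random.seed(42)

# Simulated target region (hypothetical parameters): richer households are
# more likely to connect, and electrification adds a true effect of +20.
households = []
for _ in range(10_000):
    base_income = random.gauss(100, 20)
    connected = random.random() < (0.8 if base_income > 110 else 0.2)
    income = base_income + (20 if connected else 0)
    households.append((connected, income))

treated = [y for c, y in households if c]
control = [y for c, y in households if not c]

# Naive estimate: mean income of connected minus non-connected households.
# It mixes the true effect with the pre-existing income difference.
naive = sum(treated) / len(treated) - sum(control) / len(control)
print(round(naive, 1))  # substantially larger than the true effect of 20
```

The naive estimate attributes the richer households’ pre-existing income advantage to electrification, which is precisely the bias described above.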
A second obvious comparison group is the beneficiary group itself at the point in time before the intervention, when these households had not yet been electrified. The problem here is that it can rarely be ruled out that other environmental factors, such as the general economic development, have changed in the meantime. As a consequence, it cannot be determined which part of the change in outcomes is attributable to the electrification intervention and which part to the environmental change.
The most promising approach to selecting a control group is to include in the study design a comparable region that has not been and will not be electrified. This allows comparing the changes in both regions before and after electrification, excluding both the bias from changing environmental factors and that from differences between individual households. The selection of such regions, though, requires some methodological skill and field experience. In some cases it may also not be possible to find comparable regions that will not be electrified or reached by any other development intervention.
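The before/after comparison across the two regions described above follows the difference-in-differences logic. A minimal sketch with hypothetical survey means (the incomes below are invented for illustration):

```python
# Hypothetical mean household incomes from before/after surveys in a
# project region and in a comparable non-electrified control region.
project_before, project_after = 100.0, 135.0
control_before, control_after = 95.0, 110.0

# Change over time in each region; the control region's change captures
# the general (environmental) trend that both regions share.
project_change = project_after - project_before  # 35.0
control_change = control_after - control_before  # 15.0

# Difference-in-differences: the part of the project region's change that
# is not explained by the shared trend is attributed to electrification.
impact = project_change - control_change
print(impact)  # 20.0
```

The key assumption is that without the intervention both regions would have followed the same trend, which is why the comparability of the control region matters so much.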
Options for Choice of Control Region
The obvious counterfactual situation for the stove and electrification projects would be:

- Option A: a region where these interventions have not taken place before.
- Option B: a region where these interventions have already taken place, which also provides a viable counterfactual.
While Option A investigates to what extent the project sites outperform the control regions, Option B explores to what extent the project sites catch up with the control regions. The crucial point is that between the before and the after survey, no such intervention takes place in the control region. Here one disadvantage of Option A becomes evident: it is often not certain that the control region will actually remain non-electrified, or without improved-stove interventions respectively, during that period. It has happened that unforeseen activities by other donors or partner-country institutions in the course of the project period rendered the implementation of this approach impossible. On the other hand, Option A has the advantage that it can serve as an “ex-ante impact assessment”: the information gathered in the control regions helps to anticipate the beneficiaries’ behaviour and the potential difficulties and opportunities to be expected in the wake of the EnDev project before project implementation. These findings can then be fed back into the project design.
The attribution problem is often referred to as the central problem in impact evaluation: to what extent can changes in outcomes of interest be attributed to a particular intervention? Attribution refers both to isolating and accurately measuring the particular contribution of an intervention and to ensuring that causality runs from the intervention to the outcome.
Importance of Counterfactual Analysis
A proper analysis of the attribution problem compares the situation ‘with’ the intervention to what would have happened in its absence, the ‘without’ situation (the counterfactual). Such a “with and without” comparison is challenging, since the situation without the intervention cannot be observed and has to be constructed by the evaluator.
- ↑ World Bank - http://bit.ly/1oLVNIX
- ↑ Leeuw, F. & Vaessen, J. (2009): Impact Evaluations and Development. Nonie Guidance on Impact Evaluation - http://siteresources.worldbank.org/EXTOED/Resources/nonie_guidance.pdf
--> Back to Impact Monitoring Guidelines