Experimental Design
From energypedia
Latest revision as of 16:52, 6 October 2014
Overview
Experimental designs (also known as randomization) are generally considered the most robust of the evaluation methodologies. By randomly allocating the intervention among eligible beneficiaries, the assignment process itself creates comparable treatment and control groups that are statistically equivalent to one another, given appropriate sample sizes. This is a very powerful outcome because, in theory, the control groups generated through random assignment serve as a perfect counterfactual, free from the troublesome selection bias issues that exist in all evaluations.[1]
Experimental or Randomized Control Designs
- The plan of an experiment, including selection of subjects, order of administration of the experimental treatment, the kind of treatment, the procedures by which it is administered, and the recording of the data (with special reference to the particular statistical analyses to be performed).
- Randomization, in which selection into the treatment and control groups is random within some well-defined set of people. In this case there should be no difference (in expectation) between the two groups besides the fact that the treatment group had access to the program. (There can still be differences due to sampling error; the larger the treatment and control samples, the smaller the error.)
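The points above can be illustrated with a minimal simulation. The sketch below (illustrative only; the variable names and the baseline outcome are hypothetical, not from the source) randomly splits an eligible population into treatment and control groups before any intervention, then measures the baseline gap between the two group means. With random assignment the gap reflects only sampling error, which shrinks as the samples grow.

```python
import random
import statistics

def randomize(population, seed=None):
    """Randomly split a population into equal treatment and control groups."""
    rng = random.Random(seed)
    shuffled = population[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical baseline outcome (e.g. household income index) for
# 10,000 eligible beneficiaries, drawn before any intervention.
rng = random.Random(0)
population = [rng.gauss(100, 15) for _ in range(10_000)]

treatment, control = randomize(population, seed=1)

# In expectation the groups are statistically equivalent at baseline;
# any remaining gap is sampling error.
gap = abs(statistics.mean(treatment) - statistics.mean(control))
print(f"baseline mean gap between groups: {gap:.2f}")
```

Because assignment ignores every characteristic of the units, the control group serves as a counterfactual for the treatment group; running the split with smaller samples would show a larger typical gap, matching the sample-size caveat above.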
Further Information
- Quasi-Experimental or Non-Experimental Designs
- Impact Portal on energypedia
References
- Joint Committee on Standards for Educational Evaluation (1994): The Program Evaluation Standards, 2nd ed. Thousand Oaks, CA: Sage.
- Baker, J. L. (2000): Evaluating the Impact of Development Projects on Poverty. A Handbook for Practitioners. The World Bank, Washington, D.C.