
Experimental Design

From energypedia
 
  
 
  
 
 
  
 
  
 

Latest revision as of 16:52, 6 October 2014

Overview

Experimental designs (also known as randomization) are generally considered the most robust of the evaluation methodologies. By randomly allocating the intervention among eligible beneficiaries, the assignment process itself creates comparable treatment and control groups that are statistically equivalent to one another, given appropriate sample sizes. This is a very powerful outcome because, in theory, the control groups generated through random assignment serve as a perfect counterfactual, free from the troublesome selection bias issues that exist in all evaluations.[1]


Experimental or Randomized Control Designs

  • The plan of an experiment, including the selection of subjects, the order in which the experimental treatment is administered, the kind of treatment, the procedures by which it is administered, and the recording of the data (with special reference to the particular statistical analyses to be performed).
  • Randomization, in which selection into the treatment and control groups is random within some well-defined set of people. In this case there should be no difference (in expectation) between the two groups besides the fact that the treatment group had access to the program. (There can still be differences due to sampling error; the larger the treatment and control samples, the smaller the error.)
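The randomization step described above can be sketched in a few lines of Python. The population, the baseline covariate, and the group sizes below are invented for illustration; the point is only that shuffling before splitting makes assignment independent of every pre-treatment characteristic, so the two groups are statistically equivalent in expectation:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical eligible population: each person carries a pre-treatment
# covariate (baseline household income, invented for the example) that the
# program cannot have affected.
population = [{"id": i, "baseline_income": random.gauss(500, 100)}
              for i in range(10_000)]

# Random assignment: shuffle, then split in half. Assignment is therefore
# unrelated to baseline_income or any other characteristic.
random.shuffle(population)
treatment, control = population[:5_000], population[5_000:]

t_mean = statistics.mean(p["baseline_income"] for p in treatment)
c_mean = statistics.mean(p["baseline_income"] for p in control)

# In expectation the two groups are identical; the remaining gap is pure
# sampling error and shrinks as the samples grow.
print(f"treatment mean baseline income: {t_mean:.1f}")
print(f"control mean baseline income:   {c_mean:.1f}")
print(f"difference: {abs(t_mean - c_mean):.2f}")
```

Rerunning this with larger samples drives the baseline gap between the two groups toward zero, which is the sampling-error point made in the second bullet above.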


Further Information

  • stat.yale.edu: http://www.stat.yale.edu/Courses/1997-98/101/expdes.htm
  • Quasi-Experimental or Non-Experimental Designs (energypedia)
  • Impact Portal on energypedia


References

  • Joint Committee on Standards for Educational Evaluation (1994): The Program Evaluation Standards, 2nd ed. Thousand Oaks, CA: Sage.
  • Baker, J. L. (2000): Evaluating the Impact of Development Projects on Poverty. A Handbook for Practitioners. The World Bank, Washington, D.C. http://siteresources.worldbank.org/INTISPMA/Resources/handbook.pdf

[1] World Bank - http://siteresources.worldbank.org/INTISPMA/Resources/handbook.pdf

Categories: Impacts | Questionnaires / Interviews