Quasi-Experimental or Non-Experimental Designs

= Overview =
  
'''Quasi-Experiment: '''A quasi-experimental design is an empirical study, similar to an [[Experimental Design|experimental design]] but without random assignment. Quasi-experimental designs typically allow the researcher to control assignment to the treatment condition, but they do so using some criterion other than random assignment (e.g., an eligibility cutoff mark).<ref>Wikipedia: http://en.wikipedia.org/wiki/Quasi-experiment</ref>
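
To make the distinction concrete, the short sketch below contrasts random assignment with assignment by an eligibility cutoff; the score, the cutoff value of 60 and the sample size are illustrative assumptions, not taken from the text above.

<syntaxhighlight lang="python">
# Minimal sketch: random assignment vs. assignment by an eligibility cutoff.
# The eligibility score and the cutoff of 60 are purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
eligibility_score = rng.uniform(0, 100, size=10)  # e.g. a poverty or needs index

random_assignment = rng.random(10) < 0.5          # experimental: a coin flip
cutoff_assignment = eligibility_score >= 60       # quasi-experimental: a criterion

print("score:      ", np.round(eligibility_score, 1))
print("randomised: ", random_assignment.astype(int))
print("cutoff rule:", cutoff_assignment.astype(int))
</syntaxhighlight>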
  
 
<br/>
 
<br/>
  
'''Non-Experiment: '''In a non-experimental design, the researcher cannot control, manipulate or alter the predictor variable or the subjects, but instead relies on interpretation, observation or interactions to come to a conclusion. Typically, this means the non-experimental researcher must rely on correlations, surveys or case studies, and cannot demonstrate a true cause-and-effect relationship. Non-experimental research tends to have a high level of external validity, meaning it can be generalized to a larger population.<ref name="educationportal">Education Portal: http://education-portal.com/academy/lesson/non-experimental-and-experimental-research-differences-advantages-disadvantages.html#lesson</ref>
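
As an illustration of why correlational evidence alone cannot establish cause and effect, the sketch below generates synthetic survey data in which two indicators move together only because they share a common driver; all variable names are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch: a correlation produced by a shared driver, not by causation.
# The variables (wealth, electrification, study hours) are synthetic examples.
import numpy as np

rng = np.random.default_rng(7)
wealth = rng.normal(size=200)
electrification = wealth + rng.normal(size=200)  # both depend on wealth
study_hours = wealth + rng.normal(size=200)

r = np.corrcoef(electrification, study_hours)[0, 1]
print(f"Correlation between electrification and study hours: {r:.2f}")
# The positive correlation reflects the shared driver (wealth); on its own it
# does not show that electrification causes longer study hours.
</syntaxhighlight>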
  
<br/>
= Matching Methods or Constructed Controls =
With ''matching methods'' or ''constructed controls'', one tries to pick an ideal comparison group, drawn from a larger survey, that matches the treatment group. The most widely used type of matching is ''propensity score matching'', in which the comparison group is matched to the treatment group on the basis of a set of observed characteristics or by using the “propensity score” (the predicted probability of participation given observed characteristics); the closer the propensity score, the better the match. A good comparison group comes from the same economic environment and was administered the same questionnaire by similarly trained interviewers as the treatment group.<ref name="ravallion">M. Ravallion: The Mystery of the Vanishing Benefits: Ms. Speedy Analyst’s Introduction to Evaluation. The World Bank, 1999. https://bit.ly/2L7eGOh</ref><br/>
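
The sketch below shows one way propensity score matching can be implemented, under illustrative assumptions: a logistic regression predicts participation from an observed characteristic, each treated unit is matched to the comparison unit with the closest score, and the effect is read off the matched outcome difference. The data and the single covariate are synthetic, not taken from any particular study.

<syntaxhighlight lang="python">
# Minimal propensity score matching sketch on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
income = rng.normal(size=n)                                         # observed characteristic
treated = (rng.random(n) < 1 / (1 + np.exp(-income))).astype(int)   # selection depends on income
outcome = 2.0 * treated + income + rng.normal(size=n)               # true effect = 2.0
df = pd.DataFrame({"income": income, "treated": treated, "outcome": outcome})

# 1. Estimate the propensity score P(treated | observed characteristics).
model = LogisticRegression(max_iter=1000).fit(df[["income"]], df["treated"])
df["pscore"] = model.predict_proba(df[["income"]])[:, 1]

treatment_group = df[df["treated"] == 1]
comparison_pool = df[df["treated"] == 0]

# 2. Match each treated unit to the comparison unit with the closest score.
matched_index = [
    (comparison_pool["pscore"] - p).abs().idxmin() for p in treatment_group["pscore"]
]
matched_comparison = comparison_pool.loc[matched_index]

# 3. Average outcome difference across the matched pairs.
effect = treatment_group["outcome"].mean() - matched_comparison["outcome"].mean()
print(f"Matched estimate of the treatment effect: {effect:.2f}")
</syntaxhighlight>

Here matching is done with replacement on a single covariate; real evaluations typically match on several characteristics and check that the propensity score distributions of the two groups overlap.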
<br/>

= Double Difference or Difference-in-Differences Methods =

With ''double difference'' or ''difference-in-differences'' methods, one compares a treatment and a comparison group (the first difference) before and after a program (the second difference). When propensity scores are used, comparators should be dropped if their scores lie outside the range observed for the treatment group.<ref name="ravallion" /><br/>
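
The arithmetic of the double difference can be shown with a toy before/after table; the numbers below are invented purely to walk through the two differences.

<syntaxhighlight lang="python">
# Minimal difference-in-differences sketch on invented survey data.
import pandas as pd

df = pd.DataFrame({
    "group":   ["treatment"] * 4 + ["comparison"] * 4,
    "period":  ["before", "after"] * 4,
    "outcome": [10, 14, 11, 15, 10, 11, 9, 10],
})

means = df.groupby(["group", "period"])["outcome"].mean()

# First difference: before/after change within each group.
change_treatment  = means["treatment", "after"] - means["treatment", "before"]
change_comparison = means["comparison", "after"] - means["comparison", "before"]

# Second difference: the program effect net of the common time trend.
did = change_treatment - change_comparison
print(f"Difference-in-differences estimate: {did:.2f}")
</syntaxhighlight>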
= Instrumental Variables or Statistical Control Methods =
With ''instrumental variables'' or ''statistical control'' methods, one uses one or more variables that matter to participation but not to outcomes given participation. This identifies the exogenous variation in outcomes attributable to the program, recognizing that its placement is not random but purposive. The “instrumental variables” are first used to predict program participation; one then sees how the outcome indicator varies with the predicted values.<ref name="ravallion" /><br/>
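
A minimal two-stage sketch of the instrumental variables idea is given below. It assumes a synthetic instrument (here called distance) that affects participation but not outcomes directly; the data-generating process, names and coefficients are illustrative only.

<syntaxhighlight lang="python">
# Minimal two-stage least squares sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
ability = rng.normal(size=n)        # unobserved confounder of participation and outcome
distance = rng.normal(size=n)       # instrument: shifts participation, not the outcome
participation = (0.8 * distance - 0.5 * ability + rng.normal(size=n) > 0).astype(float)
outcome = 1.5 * participation + ability + rng.normal(size=n)   # true effect = 1.5

def ols(y, x):
    """Ordinary least squares intercept and slope."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: use the instrument to predict program participation.
a0, a1 = ols(participation, distance)
predicted_participation = a0 + a1 * distance

# Stage 2: see how the outcome varies with the predicted values.
b0, b1 = ols(outcome, predicted_participation)
print(f"IV estimate of the program effect: {b1:.2f}")   # should be near 1.5
</syntaxhighlight>

Regressing the outcome directly on observed participation would mix in the unobserved confounder; using only the variation predicted by the instrument avoids that, although the standard errors of this hand-rolled second stage are not valid and are omitted here.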
  
 
<br/>
 
<br/>
 
= Reflexive Comparisons =
 
  
With ''reflexive comparisons'', a baseline survey of participants is carried out before the intervention and a follow-up survey after it. The baseline provides the comparison group, and impact is measured by the change in outcome indicators before and after the intervention.<ref name="ravallion" /><br/>
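
As a sketch with invented numbers, a reflexive comparison boils down to comparing the same participants' indicator before and after the intervention, as below.

<syntaxhighlight lang="python">
# Minimal reflexive comparison sketch: the same participants are surveyed
# before (baseline) and after (follow-up) the intervention.
import numpy as np

baseline  = np.array([120, 95, 130, 110, 100])   # e.g. monthly kerosene use before
follow_up = np.array([ 80, 90, 100,  85,  95])   # the same households afterwards

impact = follow_up.mean() - baseline.mean()
print(f"Estimated impact (after minus before): {impact:.1f}")
# Any general trend over the same period is mixed into this estimate, since
# all change is attributed to the intervention.
</syntaxhighlight>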
  
 
<br/>
 
<br/>
  
 
= Further Information =
 
*[[Experimental Design|Experimental Design]]
*[http://www.socialresearchmethods.net/kb/destypes.php Social Research Methods: Types of Designs]
  
 
<br/>
 
<br/>
 
= References =

<references />
 
  
[[Category:Questionnaires_/_Interviews]]
 
[[Category:Impacts]]
 
