Analysing data to summarise it and look for patterns is an important part of every evaluation. The approaches for doing so are commonly grouped into two categories, according to whether they deal with quantitative or qualitative data.
The entered data must first be checked for correctness (spot checks verifying that the entered values correspond to the information on the questionnaire sheets) and for consistency.
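Such consistency checks can be automated once the data are in digital form. The sketch below, using hypothetical field names and plausibility ranges, illustrates the two kinds of check mentioned above: a value-range check and a cross-field check.

```python
# Sketch of automated consistency checks on entered survey data.
# Field names, valid ranges, and records are hypothetical examples.

records = [
    {"id": 1, "age": 34, "owns_stove": "yes", "stove_purchase_year": 2008},
    {"id": 2, "age": 271, "owns_stove": "no", "stove_purchase_year": 2007},  # entry error
    {"id": 3, "age": 45, "owns_stove": "no", "stove_purchase_year": None},
]

def check_record(rec):
    """Return a list of consistency problems found in one record."""
    problems = []
    if not (0 <= rec["age"] <= 120):  # plausible value range
        problems.append("age out of range")
    if rec["owns_stove"] == "no" and rec["stove_purchase_year"] is not None:
        problems.append("purchase year given but no stove owned")  # cross-field check
    return problems

for rec in records:
    for problem in check_record(rec):
        print(f"record {rec['id']}: {problem}")
```

Running this flags record 2 twice (implausible age, contradictory stove fields) and passes records 1 and 3; such automated checks complement, but do not replace, spot checks against the original questionnaire sheets.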
Depending on which software has been used to enter the data, the data may then need to be transferred into a dedicated statistical software package (e.g. from Excel to STATA). A commonly used package for both data entry and analysis is SPSS. For basic analytical purposes, however, Excel is often sufficient.
These basic purposes comprise:
- frequencies (numbers, a count of how many),
- percent distributions (proportion),
- means (average),
- medians (mid-point),
- modes (the most frequent value),
- money (costs, revenues, expenditures),
- percent change between two points in time,
- ratios.
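Most of the measures listed above can be computed without a dedicated statistics package. The following sketch uses the Python standard library on invented household-survey figures (stove types and monthly fuel costs) purely to illustrate each measure:

```python
# Sketch: the basic measures listed above, computed with the Python
# standard library; all figures are invented illustrative data.
from collections import Counter
from statistics import mean, median, mode

stove_types = ["improved", "traditional", "improved", "improved", "none"]
monthly_fuel_cost = [12.0, 18.5, 11.0, 12.0, 20.0]  # in a local currency

freq = Counter(stove_types)                                      # frequencies (counts)
pct = {k: 100 * v / len(stove_types) for k, v in freq.items()}   # percent distribution

avg = mean(monthly_fuel_cost)    # mean (average)
mid = median(monthly_fuel_cost)  # median (mid-point)
most = mode(monthly_fuel_cost)   # mode (most frequent value)

cost_before, cost_after = 18.0, 12.0  # same indicator at two points in time
pct_change = 100 * (cost_after - cost_before) / cost_before  # percent change
ratio = cost_after / cost_before                             # ratio

print(freq["improved"], pct["improved"], avg, mid, most)
# → 3 60.0 14.7 12.0 12.0
```

Excel offers the same measures through built-in functions (e.g. COUNTIF, AVERAGE, MEDIAN, MODE), which is why it often suffices for this level of analysis.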
While the risk of bias is often associated with qualitative data in particular, applying statistics to quantitative data does not protect against biased results either. Framing the analysis too strongly around expected positive impact hypotheses ("fuel savings", "better conditions", etc.) can steer interpretation away from what the data actually show. The guiding principle should therefore be to let the data, rather than the researcher, tell the story.
In the concrete data analysis process, particular caution is always advised regarding the population used, i.e. the group being referred to ("… among female adults", "of households having bought an improved stove during the last two years"). Both choosing the logically appropriate population and correctly applying data filters and manipulations are potential sources of miscalculation.
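The point about populations can be made concrete with a small sketch. Using hypothetical household records, the same statistic (mean fuel cost) gives different answers depending on whether the filter "bought an improved stove during the last two years" is applied correctly:

```python
# Sketch: the same statistic over two different populations.
# The household records and field names are hypothetical.
from statistics import mean

households = [
    {"bought_improved_stove_year": 2008, "fuel_cost": 10.0},
    {"bought_improved_stove_year": 2004, "fuel_cost": 19.0},
    {"bought_improved_stove_year": None, "fuel_cost": 21.0},
    {"bought_improved_stove_year": 2009, "fuel_cost": 12.0},
]

current_year = 2009
recent_buyers = [
    h for h in households
    if h["bought_improved_stove_year"] is not None
    and current_year - h["bought_improved_stove_year"] <= 2
]

# Mean fuel cost "of households having bought an improved stove
# during the last two years" vs. over all households:
print(mean(h["fuel_cost"] for h in recent_buyers))  # → 11.0
print(mean(h["fuel_cost"] for h in households))     # → 15.5
```

Reporting the second number while claiming the first population would be exactly the kind of miscalculation warned against above.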