All data collection processes have strengths and limitations. Keep these in mind as you plan your analysis, since they may influence your interpretations. Also be sure to note strengths and limitations when reporting results (especially for external audiences).
Possible strengths

  • Field-tested instruments were used
  • Instruments were developed with input from staff and/or participants (folks close to the work)
  • Instruments were pilot tested
  • Data were collected at multiple time points
  • You have a high response/participation rate (>75%)
  • The sample of participants in the dataset is representative of your overall target population (a quick check for both of these items is sketched after this list)
  • Instruments were developed using your Theory of Change (i.e., the data explicitly speak to outcomes that are central to the program)
  • Multiple methods were used
  • Open-ended questions unpack quantitative data, eliciting rich qualitative data to explain the “why” and “how”
  • Quantitative data were analyzed using appropriate statistical methods (e.g., significance testing)
  • Program data are compared with a benchmark standard (if available)
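
Several of the strengths above (a high response rate, a representative sample) can be verified with a quick numeric check rather than simply asserted. Below is a minimal sketch in Python with pandas showing one way to run both checks; the roster, column names, and population figures are all hypothetical stand-ins for your own data.

```python
# Minimal sketch of two checks from the list above: response rate and
# representativeness. All names and figures here are hypothetical.
import pandas as pd

# Hypothetical survey roster: one row per invited participant.
invited = pd.DataFrame({
    "participant_id": range(1, 101),
    "responded": [True] * 80 + [False] * 20,
    "age_group": ["18-34"] * 40 + ["35-54"] * 35 + ["55+"] * 25,
})

# Check 1: response rate (the >75% rule of thumb above).
response_rate = invited["responded"].mean()
print(f"Response rate: {response_rate:.0%}")  # 80% in this toy example

# Check 2: representativeness. Compare the demographic mix of respondents
# against the (assumed, known) mix of the overall target population.
target_population_mix = pd.Series({"18-34": 0.45, "35-54": 0.35, "55+": 0.20})
respondent_mix = (
    invited.loc[invited["responded"], "age_group"]
    .value_counts(normalize=True)
)
comparison = pd.DataFrame({
    "respondents": respondent_mix,
    "target population": target_population_mix,
})
print(comparison.round(2))  # large gaps flag a representativeness problem
```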
Possible limitations

  • Instruments were developed in isolation (without input from staff and/or participants)
  • Instruments were not pilot tested
  • Data are only available for one point in time
  • You have a low response rate
  • Instruments are not tied directly to your Theory of Change
  • No qualitative data to contextualize or explain quantitative results
  • Quantitative analyses cannot include significance testing due to limitations in sample size and/or staff capacity
  • The sample of participants in the dataset is not representative of the program’s target population
  • Data collection systematically excluded some participants (e.g., those with limited transportation options, low literacy levels, no internet access, etc.)
  • Self-reported answers may have “social desirability” bias (people say what they think will be viewed favorably)
  • Non-response bias (if those who answer differ from those who don’t, or if one sub-group of participants largely skipped a particular question; a simple check is sketched after this list)
  • Voluntary response bias (i.e., folks who have strong opinions tend to be more likely to participate). To reduce this bias, consider offering small incentives, such as pizza at a focus group or a gift card for survey respondents; large incentives, however, could be coercive
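
Non-response bias, flagged above, can be checked directly whenever you know something about everyone who was invited, not just those who responded. Here is a minimal sketch, assuming a hypothetical invite roster with a "site" field recorded for all invitees:

```python
# Minimal sketch of a non-response bias check. We compare respondents and
# non-respondents on a characteristic known for everyone invited (here, a
# hypothetical "site" field); a significant difference suggests bias.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical roster: everyone invited, with response status and site.
roster = pd.DataFrame({
    "responded": [True] * 30 + [False] * 20 + [True] * 10 + [False] * 40,
    "site":      ["A"] * 50 + ["B"] * 50,
})

# Cross-tabulate response status by site and test for independence.
table = pd.crosstab(roster["site"], roster["responded"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"p-value: {p_value:.3f}")
# A small p-value (e.g., < 0.05) means response rates differ by site,
# so results may under-represent one site's participants.
```

The same cross-tab approach works for any characteristic recorded for the full invite list (site, cohort, referral source, and so on).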