Data collection requires evaluators to consider a wide variety of data sources, both primary and secondary. Data collection decisions can significantly affect an evaluation project. Evaluators sometimes overlook problems that arise with secondary data, as well as the fact that data used for comparison purposes can affect validity. The types of data covered in the Learning Resources are not exhaustive, but they are the most common in program evaluation.
To prepare for this Discussion, choose one of the cases provided, review the specified Learning Resources, and complete the corresponding prompts.
Case 1: Steven Levitt, a coauthor of Freakonomics, is exemplary in his ability to make data tell "stories." Students of public administration and organizational behavior are familiar with the famous Hawthorne effect. Levitt and List's (2009) reanalysis of the original Hawthorne study (which relies on incomplete, archived, secondary data) shows how management programs developed to increase worker productivity can be evaluated or reevaluated, even though the experiments were conducted almost a century ago. Post by Day 3 a description of at least two data collection problems that Levitt and List's reanalysis might raise regarding the Hawthorne effect. Respond by Day 5 to two colleagues, explaining whether the problems they identify stem from design or from data collection. What other data collection problems could, in your view, also be important? Explain.
Case 2: Barbara Geddes (1990) points out the pitfalls of selecting cases, units, or observations purely on the basis of the dependent variable.
Post a brief explanation of the relevance of Barbara Geddes' argument for program evaluation. Then, explain when choosing cases solely on the basis of the dependent variable might be permissible. Provide a rationale for your explanations.
• Langbein, L. (2012). Public program evaluation: A statistical guide (2nd ed.). Armonk, NY: M.E. Sharpe. ◦ Chapter 7, "Designing Useful Surveys for Evaluation" (pp. 209–238)
• McDavid, J. C., Huse, I., & Hawthorn, L. R. L. (2013). Program evaluation and performance measurement: An introduction to practice (2nd ed.). Thousand Oaks, CA: Sage. ◦ Chapter 4, "Measurement for Program Evaluation and Performance Monitoring" (pp. 145–185)
• Geddes, B. (1990). How the cases you choose affect the answers you get: Selection bias in comparative politics. Political Analysis, 2(1), 131–150. Retrieved from http://www.uky.edu/~clthyn2/PS671/Geddes_1990PA.pdf
• Levitt, S., & List, J. (2009). Was there really a Hawthorne effect at the Hawthorne plant? An analysis of the original illumination experiments. Retrieved from http://www.nber.org/papers/w15016.pdf
• Urban Institute. (2014). Outcome indicators project. Retrieved from http://www.urban.org/center/cnp/projects/outcomeindicators.cfm
• Bamberger, M. (2010). Reconstructing baseline data for impact evaluation and results measurement. Retrieved from http://siteresources.worldbank.org/INTPOVERTY/Resources/335642-1276521901256/premnoteME4.pdf
• Parnaby, P. (2006). Evaluation through surveys [Blog post]. Retrieved from http://www.idea.org/blog/2006/04/01/evaluation-through-surveys/
• Rutgers, New Jersey Agricultural Experiment Station. (2014). Developing a survey instrument. Retrieved from http://njaes.rutgers.edu/evaluation/resources/survey-instrument.asp
• MEASURE Evaluation. (n.d.). Secondary analysis of data. Retrieved February 24, 2015, from http://www.cpc.unc.edu/measure/our-work/secondary-analysis/secondary-analysis-of-data
• Zeitlin, A. (2014). Sampling and sample size [PowerPoint slides]. Retrieved from http://www.povertyactionlab.org/sites/default/files/2.%20Sampling%20and%20Sample%20Size_AFZ3.pdf