Many programs are implemented as part of the school curriculum or as a districtwide or statewide initiative. In such cases, participants may take part in those programs by default, as part of their education or work. However, if data are collected or used in the program evaluation, participants have the right to consent or refuse to have their information used in the evaluation. In some situations and for some data, participants may have consented before the evaluation to have their information used for various purposes, and that consent may extend to your evaluation. If you think this may be the case for your evaluation, be sure the evaluators verify it with your administration. In other instances, especially when data will be newly collected for the evaluation, the evaluator should obtain informed consent from participants before data collection begins. Depending upon the nature of your study and your institution, informed consent may be obtained through permission forms or through a formal human subjects review process.
As part of the evaluator’s responsibility to protect people, information obtained through and used by the evaluation must be kept confidential. Individual identities should be kept private, and access to evaluation data should be limited to the evaluation team. Evaluators should protect privacy and ensure confidentiality by not attaching names to data and by ensuring that individuals cannot be directly or deductively identified from evaluation findings. An exception may be case studies or evaluations that use student work as examples. For these evaluations, take care that your informed consent forms and written permissions explicitly state that participating individuals or organizations agree to being identified in evaluation reports, either by name or through examples used in the report.
Finally, you must be especially careful not to blur the lines between the two roles of program staff and evaluation team when it comes to privacy and confidentiality. This is one reason it is prudent to have the external evaluator on your evaluation team collect, manage, and analyze your data. If your data are particularly sensitive, or if evaluation participants were promised complete confidentiality, having an external evaluator handle all data collection and management is the practical choice as well as the ethical one. See the Ethical Issues section in Appendix C for resources on the ethical considerations and obligations of evaluation.
How Do I Collect the Data?

Your data collection approach will depend upon your evaluation method. Table 3 provides an overview of data collection procedures for various evaluation methods.
Table 3: Evaluation Methods and Tools: Procedures
Assessments and Tests
• Review the test to be sure that what it measures is consistent with the outcomes you hope to affect.
• Review the test manual to be sure the test has adequate reliability and validity. (See the reliability and validity sidebars on pages 45 and 46 for more information.)
• Be sure that test proctors are well trained in test administration.

Surveys and Questionnaires
• Develop the survey questions or choose an existing survey that addresses your evaluation needs.
• Pilot test the survey to uncover and correct problems with survey items and questions, as well as to plan data analyses.
• Decide in advance on a target response rate, as well as the maximum number of times you will administer the survey or send the questionnaire.
• Examine reliability and validity. (See the reliability and validity sidebars on pages 45 and 46 for more information.)

Interviews
• Develop an interview protocol, highlighting key questions.
• Include question probes to gather more in-depth information.
• Limit the length of the interview so that people will be more willing to take part, and tell participants in advance how much time will be needed.
• Obtain permission to digitally record so that you can concentrate on listening and asking questions. (The recording can be transcribed and analyzed after the interviews.)

Focus Groups
• As with an interview, develop a focus group protocol that includes key questions.
• Limit group size. (Six to eight participants tends to work well, though a skilled facilitator may be able to handle a larger group.)
• Purposefully organize focus groups that include participants who can build upon and benefit from each other’s ideas, providing for richer discourse.
• Likewise, include participants who will feel comfortable voicing their opinions in the group.
• Obtain permission to digitally record so that you can concentrate on listening and asking questions. (The recording can be transcribed and analyzed after the focus groups.)

Observations
• Design the observation protocol and rubrics (if you will be analyzing data with rubrics). Remember to consider the environment and atmosphere, dispositions, pedagogy, curriculum, etc., when designing your protocol and rubrics.
• Observers should be as unobtrusive as possible so as not to influence the environment they are observing.
• See the Rubrics row below in this table for pointers on design and consistency in scoring.

Existing Data
• Review existing data for applicability and accuracy. Caution: simply because data exist does not mean that they are complete or accurate.

Portfolios
• Choose the artifacts to be included in the portfolio.
• Design the scoring rubric in advance.
• See the Rubrics row below in this table for pointers on design and consistency in scoring.

Case Studies
• Case studies might involve a combination of the above methods.

Rubrics
• Design the scoring rubrics before examining the qualitative data.
• Describe the best response or variation in detail.
• Decide on the number of variations or categories. (Four or five tends to work well.)
• For each variation, describe in detail what the response would look like. Typically, the best response is at the top of the scale. For example, on a scale of 1 to 4, the best response would be a 4; a variation with many but not all components of the best response might be a 3; a variation with a few of those components might be a 2; and a variation with few or none of them would be a 1.
• Train raters or observers to score using the rubric. Have several raters score the same responses, observations, or student work, and compare their scores to examine inter-rater reliability (see the sketch following this table). Discuss scoring among raters to improve consistency.
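
The inter-rater reliability check in the Rubrics row can be made concrete with a simple calculation. The sketch below, a minimal Python example using hypothetical scores, computes percent agreement (the proportion of samples on which two raters give the same score) and Cohen’s kappa, a common statistic that corrects that agreement for chance. The handbook does not prescribe a particular statistic, so treat this as one reasonable option rather than the required method.

# Inter-rater reliability for two raters scoring the same work on a 1-4 rubric.
# Scores below are hypothetical, for illustration only.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of samples on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected) agreement."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: the probability that both raters independently
    # assign the same score, summed over every score that was used.
    expected = sum(
        (counts_a[score] / n) * (counts_b[score] / n)
        for score in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two trained raters on ten student work samples.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.0%}")
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")

With these hypothetical scores, the raters agree on 8 of the 10 samples (80 percent), and kappa is about 0.71. As a rough rule of thumb, a low kappa signals that raters need further training or that the rubric’s category descriptions need sharpening.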