If the audience wants information immediately, write a short summary of major findings and follow up with a longer, more detailed report.
Don’t rely on the typical end-of-year evaluation report to communicate evaluation findings. Communicate to multiple audiences using multiple methods.
In addition to regularly sharing evaluation findings with program staff, let other stakeholders know on an ongoing basis how the program is doing. Think creatively about modes of communication that will reach all stakeholders.
• Don’t be afraid to include recommendations or identify possible areas for change. Recommendations are critical to ensuring your evaluation findings are used appropriately. If you want to make changes, you will have to discuss them sooner or later, and putting them in the report is a good way to start the conversation.
Finally, a long report is not the only way to communicate results. It is one way and perhaps the most traditional way, but there are many other methods available. Other options include:
• A memo or letter;
• A special newsletter or policy brief;
• A conference call or individual phone call;
• A presentation before a board or committee, or at a conference;
• A publication in a journal, newspaper, or magazine;
• A workshop;
• A web page or blog; or
• The school district newsletter or website.
Evaluation reports and presentations typically follow a common format. First comes an executive summary or overview that notes key findings. In fact, some readers will read only the executive summary, so be sure it contains the most important information. Other report sections might include:
• Introduction (including program background and theory);
• Evaluation design (including logic model, evaluation questions, and evaluation methods);
• Results (including all findings from the evaluation, organized by evaluation question);
• Conclusions (including your interpretation of the results);
• Recommendations (including how the program should proceed based on your findings); or
• Limitations (including limitations based on evaluation design, analysis of data, and interpretation of findings).
See Appendix C for more information on Interpreting, Reporting, Communicating, and Using Evaluation Results.
The READ oversight team met monthly to discuss program monitoring and improvement. At each meeting, the READ evaluator, Dr. Elm, and the E-Team provided an update to the oversight team. Based on the formative evaluation findings, the oversight team developed recommendations and a plan for the next month.
At the December school board meeting, the oversight team presented a status report, noting important findings from the evaluation. The oversight team asked Dr. Elm to create a full evaluation report for the administration and to present the findings at the August school board meeting. The E-Team also drafted a one-page brief of evaluation findings, which was provided to all participants as well as to the local newspaper.
STEP 5: INFORM and REFINE – How Do I Use the Evaluation Results?
Informing for Program Improvement
One of the most important uses of evaluation findings is for program improvement. In fact, for many audiences, your evaluation communication should focus on improvement. In order to do this, evaluation communication and reporting should include not only positive findings but also findings that may not be flattering to your program. These not-so-positive findings are the basis for program improvement.
When using evaluation results, ask yourself whether your findings are what you expected. Has the program accomplished what was intended? If yes, do you see areas where it can be made even better? If no, why do you think the program was not as successful as anticipated? Did the program not have enough time to succeed? Was the implementation delayed or flawed? Or perhaps the program theory was not correct. In any case, using evaluation results is vital to improving your program.
Be sure to report both positive and negative findings. Negative findings can be communicated as lessons learned or areas for improvement.
Informing for Accountability
Another important use of evaluation findings is for accountability purposes. Designing and implementing programs take valuable resources, and your evaluation findings can help you determine whether the expenditure is worth the results.
Accountability pertains both to basic questions, such as whether the program was actually implemented and whether program funding was spent as intended, and to more involved questions, such as whether the program is a sound investment. For this reason, as with communications for program improvement, it is important for your evaluation reporting to include all findings, good and bad, so that informed decisions can be made about the program’s future. Should the program be continued or expanded? Should it be scaled back? While evaluation reporting can be used for program marketing or for encouraging new funding, evaluation findings should include sufficient information for decisions regarding accountability. One caution, however: decisions regarding accountability should be made carefully and be based on evidence from multiple sources derived from a rigorous evaluation.
During her evaluation update at the November oversight team meeting, Dr. Elm shared initial findings from the evaluation of the implementation of READ program activities. Indicators showed that many students did not have the technology available at home to access READ. Even within schools with high numbers of students who had the necessary technology, variability across classrooms was large. Only one of the 40 classrooms had 100 percent of students able to access READ from home. Open-ended survey items revealed that teachers did not feel comfortable assigning READ homework to some but not all students in their classroom and therefore chose not to train students in the home use of READ. Only one teacher had trained his students in the home use of READ, because all of his students had the necessary technology at home. This teacher indicated that he would like to continue with the home component of READ.
The oversight team discussed the home-component issue and asked for advice from the E-Team on how to proceed. With the support of the E-Team, the oversight team decided to run a one-classroom pilot of the home component but otherwise to remove the home component from the program during Year 1. Based on results from the pilot, the team would consider implementing a partial home component in Year 2.
During the same November update, Dr. Elm provided some findings from the evaluation of the early/short-term objectives on the READ logic model. She noted that in October all teachers had reported using READ in their classroom and that over half of teachers reported that they had used READ every week. However, over one-quarter of teachers reported that they had used READ in their classroom only once or twice in the last month. Survey data indicated that some of these teachers felt overwhelmed with the technology and some said they could not fit READ classroom use into their already busy day.
The oversight team discussed this information and decided to make a midcourse adjustment. Before the READ program began, team members had thought that the initial professional development and ongoing technical assistance would be sufficient. However, they now believed that they needed to make one-on-one professional development available to those teachers who would like to have someone come into their classroom and model a lesson using READ. Mrs. Anderson assigned one of the oversight team members to arrange this one-on-one professional development.
During her evaluation update at the January oversight team meeting, Dr. Elm shared findings from the evaluation of the intermediate objectives on the READ logic model. Dr. Elm explained that on the December teacher survey, slightly fewer than half of the teachers reported using the READ assessment data on a weekly basis to plan and differentiate instruction. One in 10 teachers said they had never used the READ assessment data. Dr. Elm further stated that the lack of use of READ assessment data was likely affecting scores on the READ implementation rubric. Based on classroom observations, interviews, and surveys, she believed that the quality of teachers’ classroom use of READ was progressing nicely but that the lack of assessment data use was lowering the overall rubric score.