The oversight team knew that using the READ assessment data to plan and differentiate instruction was critical to the program’s success. Mrs. Anderson decided to discuss the issue with the READ faculty at each school in an effort to understand what she could do to facilitate their use of the READ assessment data. Additionally, the E-Team planned to elaborate on the rubric so that subscores could be captured for various components of the rubric. These rubric subscores would be especially useful for analysis when the data are disaggregated by teacher use of READ in the classroom, student interaction in the classroom, and teacher use of READ student assessment data to plan and differentiate instruction. The revised rubric would be developed during the spring, piloted over the summer, and implemented during Year 2.
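The kind of disaggregation described above can be sketched in a few lines. This is a minimal, illustrative example only: the field names, usage categories, and subscore values are invented and do not come from the actual READ rubric.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rubric subscore records; "read_use" stands in for a category
# such as teacher use of READ in the classroom. All values are invented.
observations = [
    {"teacher": "T1", "read_use": "regular",  "subscore": 3.5},
    {"teacher": "T2", "read_use": "regular",  "subscore": 3.0},
    {"teacher": "T3", "read_use": "sporadic", "subscore": 2.0},
    {"teacher": "T4", "read_use": "none",     "subscore": 1.5},
    {"teacher": "T5", "read_use": "sporadic", "subscore": 2.5},
]

def disaggregate(records, by, value):
    """Group records by one field and average another."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[by]].append(rec[value])
    return {level: mean(vals) for level, vals in groups.items()}

print(disaggregate(observations, by="read_use", value="subscore"))
# {'regular': 3.25, 'sporadic': 2.25, 'none': 1.5}
```

The same `disaggregate` helper could be reused for any rubric component and any grouping variable, which is the point of capturing subscores separately.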
Finally, at the evaluation update at the end of the school year, Dr. Elm reported on the preliminary evaluation of long-term goals of the READ program. Student reading achievement was higher among students of teachers who used READ regularly and as intended, and the difference was statistically significant. Further, students of teachers who used the READ assessment data to tailor classroom instruction had higher reading test scores than students of teachers who did not use the READ assessment data, and again the difference was statistically significant.
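A group comparison like the one Dr. Elm reported is often tested with a two-sample t test. The sketch below computes Welch's t statistic (which does not assume equal variances) from scratch using only the standard library; the score lists are invented for illustration and are not the READ data.

```python
from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1 denominator)
    se = sqrt(va / na + vb / nb)                     # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Invented reading scores for illustration only.
regular_use = [82, 78, 85, 90, 74, 88, 81, 79]
no_use      = [70, 65, 72, 68, 75, 66, 71, 69]

t = welch_t(regular_use, no_use)
print(round(t, 2))  # a large |t| indicates a difference unlikely to be due to chance
```

In practice one would obtain a p-value by comparing the statistic to a t distribution with the appropriate degrees of freedom; a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` handles this directly.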
Year 1 evaluation findings also indicated that not all teachers had bought into using READ with their students, especially the READ assessment component. The oversight team decided to share the evaluation findings with all teachers at a staff meeting in order to encourage them to use READ in their classrooms. Before sharing the evaluation findings with teachers, Dr. Elm conducted an anonymous follow-up survey at the staff meeting in an effort to find out why some teachers chose not to use READ.
If your program design and evaluation were inclusive processes that involved stakeholders and participants from the start, it is more likely that your evaluation findings will be used for program improvement and accountability. Involving others in your program’s implementation encourages a shared sense of responsibility for the program as well as a shared investment in the program’s success. Hearing about a program at its very start and not again until an evaluation report is provided does not foster the ownership among staff, stakeholders, and participants that is needed for a successful program.
So, how do you make sure your evaluation report, along with all of your hard work and informative results, is not put on a shelf to gather dust? Make evaluation a participatory process, from understanding and defining the program in Step 1 to informing the program in Step 5.
Why do we evaluate?
To ensure that the programs we are using in our schools are beneficial to students; to make programs and projects better; and to learn more about what programs work well, for whom, and to what extent.
How do we increase the likelihood that evaluation results will be used?
Create the opportunity for evaluation to have an impact on programmatic decision-making.
How do we create this opportunity?
We can start by:
Embedding evaluation into our programs from the outset;
Communicating program findings frequently and regularly; and
Making the evaluation process participatory from start to finish.
Refining the Program’s Theory Your evaluation findings should also be used to refine your logic model. As mentioned earlier, the logic model is a living model and its underlying assumptions should be dynamic, changing as new information is learned. If the culture in which your program is implemented is a learning culture, using findings to improve the logic model is a natural fit. However, in other environments, it may not be as easy to apply your findings to logic model improvement. Regardless, if your program is to continue, you should keep its program logic model up-to-date.
An up-to-date logic model can facilitate future evaluation and serve as the cornerstone of your program. Your program’s theory and logic model should be part of the core documentation of your program and can be used to train new program participants, as well as to explain the program to parents, administrative staff, potential funders, and other stakeholders.
Take Action You have completed a lot of work. You have distributed your evaluation findings through letters, reports, meetings, and informal conversations. You have given presentations. So, what do you do now? How do you make sure that your information is used?
First, think about what changes you would like to see. Before you can attempt to persuade others to use your information, you need to figure out what you would like to happen. What changes would you like to see or what decisions do you think need to be made as a result of your information?
Second, think about what changes others might want. Learning how others would like the information to be used gives you more awareness of where they are coming from and more insight as to how they would best be motivated.
Next, take action. You have evidence from your evaluation, you have shared it with others, and you know what you want done. Ask for it! Find out who is in charge of making the changes you want and make sure they hear your findings and your recommendations. Give them a chance to process your suggestions. Then follow up. See Appendix C for more information on Interpreting, Reporting, Communicating, and Using Evaluation Results.
The READ oversight team felt that the logic model they created accurately portrayed the program. Yet, since it was clear from November that the home component could not be fully implemented, they wanted to highlight this on the logic model. The team decided to draw a box around the program as it was implemented, excluding the home component. Below the model, a note was provided indicating why the home component was not part of the existing implementation and that it was currently being piloted in one classroom. The oversight team hoped to understand more about the implementation of the home component, as well as the success of the home component, from examining results from the pilot classroom.
The oversight team also wanted to understand more about the strength of the relationship between classroom use of READ and state assessment scores and between use of READ assessment data for instructional planning and state assessment scores. It noted this on the logic model and asked the E-Team to investigate the linkages further in the second year of the evaluation.
Change Takes Time One final note: change takes time. We all want to see the impact of our efforts right away, but in most cases change does not happen quickly. Embedded evaluation allows you to show incremental findings as you strive to achieve your long-term goals, and can help you to set realistic expectations regarding the time it takes to observe change related to your indicators. If you plan to use your evaluation results to advocate program expansion or to secure funding, keep in mind that changing policy based on your findings also will take time. People need to process your evaluation findings, determine for themselves how the findings impact policy and practice, decide how to proceed based on your evidence, and then go through the appropriate process and get the proper approvals before you will see any change in policy from your evaluation findings. As mentioned earlier, including others throughout your program’s design and implementation can facilitate the change process. However, even with a participatory evaluation and positive findings, policy change will occur on its own time line.
The READ oversight team recommended that the READ program be offered to all students in the district and that it be incorporated into the regular curriculum. The team felt that the positive findings regarding test scores were strong enough that all students should have access to the program.
However, since READ funding was still at the 50% level for the second year, the oversight team planned to work with Dr. Elm and the E-Team for another year in order to continue to refine the implementation of the program in the classroom and to further understand the success of the READ program with students. To do this, the team recommended that the second-year evaluation include student surveys and focus groups as data sources to address objectives related to student interaction and engagement in the classroom.
The oversight team decided to continue to advocate for the program’s expansion in the hope that it would be institutionalized soon.
Appendix A: Embedded Evaluation Illustration – READ* Program Snapshot The Reading Engagement for Achievement and Differentiation (READ) program is a districtwide initiative focused on improving student reading skills in Grades 3-5. The READ evaluation uses an experimental design and theory-based, embedded evaluation methods.
*This example was created solely to illustrate how the principles in this guide could be applied in actual situations. The program, characters, schools, and school districts mentioned in the example are fictitious.
Step 1: Define the Program
Background For the past 5 years, reading scores in the Grovemont School District have been declining. The curriculum supervisor, Mrs. Anderson, has tried many strategies to improve reading skills, but scores continue to decline. Mrs. Anderson has been searching for curricular and assessment materials that are better aligned with state reading standards and that provide ongoing standards-based assessment data. She found a program called READ (Reading Engagement for Achievement and Differentiation) that looked promising. After reviewing research on the program and documentation from the vendor, and after numerous discussions and interviews with other districts that had implemented the program, Mrs. Anderson and the district superintendent decided to present the READ program to the school board in order to gain approval for funding the program for Grades 3-5.
At last month’s meeting, the school board voted to partially fund the READ program. Due to recent state budget cuts, the school board was only able to fund the program at 50% for 2 years. At the end of the 2 years, the board agreed to revisit its funding decision. The board required an evaluation report and presentation due in September of each year.
Before starting to plan the READ program, Mrs. Anderson invited one teacher from each of the district’s six elementary schools, the district reading coach, one of the district’s reading specialists, and the district technology coordinator to join the READ oversight team. This 10-member team was charged with planning the READ program and its evaluation. The team asked an evaluator from the local university to conduct the READ evaluation and to attend oversight team meetings.
The Evaluation The oversight team asked the external evaluator, Dr. Elm, to help them plan the evaluation. Dr. Elm suggested that the oversight team build evaluation into the program as the team designed it. By embedding evaluation into the program, information from the evaluation would be available to guide program implementation. Evaluation data would both drive program improvement and be the foundation for future decisions regarding whether the program should be continued, expanded, scaled down, or discontinued.
The oversight team members invited Dr. Elm to lead them through the process of building evaluation into their program planning. Dr. Elm explained that the first step is to gain a thorough understanding of the program. In doing this, Mrs. Anderson shared the materials she had already reviewed with the oversight team. In addition, the oversight team contacted four school districts that had used the READ program successfully in order to learn more about the program. To develop a thorough and shared understanding of the context in which the READ program would be implemented, the team reviewed the state’s reading standards, the district’s strategic plan, the district’s core learning goals and curriculum maps in reading, and the district’s technology plan. The team also examined reading grades and state reading assessment scores for the district as a whole, as well as by school, English Language Learner (ELL) status, and special education status for the past 5 years.
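A multi-year review disaggregated by subgroup, like the one the team performed, can be sketched as grouping score records by year and subgroup and averaging each cell. The records below are invented for illustration; in practice they would come from the district's assessment files, and the same approach would apply to school or special education status.

```python
from collections import defaultdict
from statistics import mean

# Invented assessment records; "ell" marks English Language Learner status.
records = [
    {"year": 2019, "ell": False, "score": 74}, {"year": 2019, "ell": True, "score": 61},
    {"year": 2020, "ell": False, "score": 72}, {"year": 2020, "ell": True, "score": 59},
    {"year": 2021, "ell": False, "score": 69}, {"year": 2021, "ell": True, "score": 57},
]

def trend(recs, group_field):
    """Mean score per (year, subgroup) cell, making a multi-year decline visible by group."""
    cells = defaultdict(list)
    for r in recs:
        cells[(r["year"], r[group_field])].append(r["score"])
    return {key: mean(vals) for key, vals in sorted(cells.items())}

for (year, ell), avg in trend(records, "ell").items():
    print(year, "ELL" if ell else "non-ELL", avg)
```

Laying the cells out this way makes it easy to see whether a decline is uniform or concentrated in particular subgroups, which is exactly the question the disaggregated review is meant to answer.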