At this point in the evaluation design, Dr. Elm recommended that the READ oversight team create an evaluation subcommittee, named the E-Team, composed of three to five members. The evaluation subcommittee was formed as a partnership and a liaison between the READ program staff and the external evaluator, and was tasked with helping to design the evaluation and with monitoring the evaluation findings shared by the READ external evaluator. Mrs. Anderson appointed two oversight committee members (the district reading coach and one of the district reading specialists) to the E-Team. She also asked the district supervisor for assessment and evaluation to serve on the E-Team and to be the primary internal contact for the READ external evaluator. Finally, she invited Dr. Elm to serve as the chair of the E-Team and as the lead external evaluator of the READ program. As the external evaluator, Dr. Elm would conduct the evaluation and share findings with the E-Team and the oversight team. The four-member E-Team’s first task was to create the READ logic model.
This guide touches on the basics of logic models. Logic models can be simple or quite sophisticated, and can represent small projects as well as large systems. If you would like to know more about logic models or logic modeling, a few good resources are included in Appendix C.
Why Is Understanding the Program Important?
As stated earlier, understanding your program by defining your program’s theory is the most important step in program design and evaluation. The logic model that you create to depict your program’s theory is the foundation of your program and your evaluation. Once you have a draft logic model, you can share the draft with key program stakeholders, such as the funding agency (whether it be the state education agency, the district, the school board, or an external foundation, corporation, or government entity), district staff, teachers, and parents. Talking through your model with stakeholders and asking for feedback and input can help you improve your model as well as foster a sense of responsibility and ownership for the program. While your program may be wonderful in theory, it will take people to make it work. The more key stakeholders you can substantively involve in the logic model development process, and the more key people who truly understand how your program is intended to work, the more likely you are to succeed.
Using the program definition developed by the oversight team, the E-Team worked to create a logic model. The E-Team started with the long-term goals on the right side of the model, then listed the contextual conditions and resources on the left. Just to the right of the context, the E-Team listed the strategies and activities. Finally, the E-Team used the oversight team’s assumptions to work through the early (short-term) and intermediate objectives.
Finally, following and updating your logic model throughout your program’s operation, as well as recording the degree to which early (short-term) and intermediate objectives have been met, enable you to examine the fidelity with which your program is carried out and to monitor program implementation. Logic modeling as an exercise can facilitate program understanding, while the resulting logic model can be a powerful tool to communicate your program’s design and your program’s results to stakeholders. Stakeholders, including the funding agency, will want to know the extent to which their resources – time and money – were used effectively to improve student outcomes.
This is a reduced-size version of the full logic model for the READ program. Appendix A provides the full-size logic model in Figure 3: READ Logic Model.
STEP 2: PLAN – How Do I Plan the Evaluation?
What Questions Should I Ask to Shape the Evaluation?
While many evaluations ill-advisedly begin with creating evaluation questions, the first step should always be understanding the program. How can you create important and informed evaluation questions until you have a solid understanding of the theory that underlies a program? Because you have already created a logic model during the process of understanding your program, generating your evaluation questions is a natural progression from the model.
Your evaluation questions should be open-ended. Avoid yes/no questions, as closed-ended responses limit the information you can obtain from your evaluation. Instead of asking “does my program work?” you might ask:
To what extent does the program work?
How does the program work?
In what ways does the program work?
For whom does the program work best?
Under what conditions does the program work best?
Evaluation questions tend to fall into three categories taken from your logic model: measuring the implementation of strategies and activities, identifying the progress toward short-term and intermediate objectives, and recognizing the achievement of long-term program goals. The following paragraphs will lead you through a process and some questions to consider while creating your evaluation questions.
At the next READ planning meeting, the E-Team shared the draft logic model with the full oversight team. Oversight team members reviewed the model and felt comfortable that it represented the assumptions and logic as they had agreed on at their last meeting. No changes were needed to the logic model at this time. Next, the E-Team and the oversight team used the logic model to develop evaluation questions for the READ program.
Evaluating Implementation of Activities and Strategies
How do you know if your program contributed toward achieving (or not achieving) its goals if you do not examine the implementation of its activities and strategies? It is important for your evaluation questions to address the program’s activities and strategies. Education does not take place in a controlled laboratory but rather in real-world settings, which require that you justify why you believe the program strategies resulted in the measured outcomes. Your program’s underlying theory, represented by your logic model, shows the linkages between the strategies and activities and the goals. The evaluation of your program’s operation will set the stage to test your theory. And more importantly, asking evaluation questions about how your strategies and activities were applied can tell you the degree to which your program had the opportunity to be successful.
It is never a good idea to measure outcomes before assessing implementation. If you find down the road that your long-term goals were not met, is it because the program did not work or because key components of it were not applied properly or at all? Suppose you find that your long-term goals were successfully met. Do you have enough information to support that your program contributed to this success? It is a waste of resources to expend valuable time and money evaluating program outcomes if important program components were never put into place. While you will likely want to create evaluation questions that are specific to your program’s activities and strategies, a fundamental evaluation question at this stage is: What is the fidelity with which program activities have been implemented?
Use your logic model to guide you as you create your evaluation questions.
Your questions regarding strategies and activities address the degree to which your program had the opportunity to be successful. Questions in this category may also address contextual conditions and resources.
Questions addressing your early and intermediate objectives are important in determining if your program is on track toward meeting its long-term goals.
Using each of the strategies and activities listed on the left-hand side of the logic model, the E-Team worked with the READ oversight team to develop evaluation questions. For each strategy or activity, they developed questions addressing whether the strategy or activity had been carried out, as well as questions addressing some contextual conditions and resources necessary for program implementation.
The READ E-Team and oversight team created six evaluation questions to assess READ strategies and activities:
Strategies and Activities, with Corresponding Evaluation Questions

1. Interactive, standards-based classroom lessons (using the READ software with interactive classroom technologies and individual handheld mobile devices for each student).
Evaluation question: To what extent did teachers have access to the necessary technology in the classroom to use READ in their instruction?

2. Standards-based reading assessments (Internet-based, formative assessments of student reading skills administered within the READ software).
Evaluation question: To what extent were READ assessments made available to students and teachers? Examine overall, by school, and by grade level.

3. Standards-based reading homework (Internet-based, using READ software).
Evaluation question: To what extent did students have access to READ at home? Examine overall and by grade level, race, gender, and socioeconomic status.

4. Teacher professional development on integrating READ into classroom instruction (using an interactive wireless pad).
Evaluation question: To what extent did teachers receive professional development on how to integrate READ into their classroom instruction?

5. Teacher professional development on using READ assessment data for classroom lesson planning.
Evaluation question: To what extent did teachers receive professional development on how to incorporate READ assessment data into their classroom lesson planning?

6. Student training on using READ (in the classroom and at home).
Evaluation question: To what extent were students trained in how to use READ?
Note: These questions are intended to evaluate the degree to which the program had the opportunity to be successful, as well as to determine if additional program supports are needed for successful implementation.