A primary purpose of evaluation is to make summative decisions. You can use results from rigorous summative evaluations to make final, outcome-related decisions about whether a program should be funded or whether its funding should be changed. Summative decisions include whether to continue, expand, or discontinue a program based on evaluation findings.
Another important purpose of evaluation is to make formative decisions. You can use data from rigorous formative evaluations to improve your program while it is in operation. Formative evaluation examines the implementation process, along with outcomes measured throughout implementation, to inform decisions about midcourse adjustments, technical assistance, or professional development that may be needed. It also documents your program’s implementation so that educators in other classrooms, schools, or districts can learn from your program’s evaluation.
Who Should Do the Evaluation? Once you have decided to evaluate the implementation and effectiveness of a program, the next step is to determine who should conduct the evaluation. An evaluation can be conducted by someone internal or external to your organization. However, the ideal arrangement is a partnership between the two: an evaluation team that includes both an internal and an external evaluator.
Preferably, evaluation is a partnership between staff internal to your organization assigned to the evaluation and an experienced, external evaluator.
Such a partnership will ensure that the evaluation provides the information you need for program improvement and decision-making. It also can build evaluation capacity within your organization.
An internal evaluator may be someone at the school building, district office, or state level. For evaluations that focus on program improvement
and effectiveness, having an internal evaluator on your evaluation team can foster a deeper understanding of the context in which the program operates. Involving people inside your organization also helps to build capacity within your school or district to conduct evaluation. An internal evaluator should be someone who is in a position to be objective regarding program strengths and weaknesses. For this reason, choosing an internal evaluator who is responsible for the program’s success is not recommended and may compromise the evaluation. In order to maintain objectivity, an internal evaluator should be external to the program. However, while staff internal to the program itself should not be part of the evaluation team, they should certainly partner with the evaluation team in order to ensure that the evaluation informs the program during every phase of implementation.
It is good practice to have an external evaluator be part of your evaluation team. Using an external evaluator as a “critical friend” provides you with an extra set of eyes and a fresh perspective from which to review your design and results. Professional evaluators are trained in the design of evaluations to improve usability of the findings, and they are skilled in data
collection techniques such as survey design, focus group facilitation, conducting interviews, choosing quality assessments, and performing observations. An experienced evaluator can also help you analyze and interpret your data, as well as guide you in the use of your results. Further, when you are very close to the program being evaluated, objectivity or perceived objectivity may suffer.
The choice of who conducts your evaluation should depend upon the anticipated use of the results and the intended audience, as well as your available resources.
Partnering with an external evaluator can improve the credibility of the findings, as some may question whether an evaluator internal to an organization can have the objectivity to recognize areas for improvement and to report results that might be unfavorable to the program. For some programs, you may choose to use an evaluator who is external to your organization to be the sole or
primary evaluator. An external evaluator may be a researcher or professor from your local university or a professional evaluator from a private evaluation firm.
If evaluation results are to be used with current or potential funding agencies to foster support and assistance, contracting with an external evaluator would be your most prudent choice. If the evaluation is primarily intended for use by your
organization in order to improve programs and understand impact, an evaluation team composed of an internal and an external evaluator may be preferred. Connecting with someone external to your organization to assist with the evaluation and the interpretation of results will likely enhance both the usability and the credibility of your findings. A partnership between an internal evaluator and an external evaluator is the ideal arrangement to ensure that the evaluation and its results are useful.
Although an external evaluator might be preferred, funding an evaluator who is external to your organization may not be feasible for some programs. In such cases, partnering with an evaluator who is internal to your organization, yet external to your program, might work well. For instance, staff from a curriculum and instruction office implementing a program might partner with staff from another office within the district, such as an assessment or evaluation office, to conduct the evaluation.
If resources are not available for an external evaluator and there is no office or department in your organization that is not affected by your program, you may want to consider other potentially affordable evaluation options. You could put out a call to individuals with evaluation experience within your community who might be willing to donate time to your program, contact a local university or community college regarding faculty or staff with evaluation experience who might work with you at a reduced rate, ask your local university if there is a doctoral student in evaluation who is looking for a research opportunity or dissertation project, or explore grant opportunities that fund evaluation activities.
What Is Embedded Evaluation? The embedded evaluation approach presented in this guide is one of many approaches that can be taken when conducting an evaluation. Embedded evaluation combines elements from several approaches, including theory-based evaluation, logic modeling, stakeholder evaluation,
and utilization-focused evaluation. See Appendix C: Evaluation Resources for resources with additional information on evaluation approaches.
Further, it is important to note that evaluation is not a linear process. Although the steps of embedded evaluation may appear to be sequential rungs on a ladder culminating in a final step, they are not rigid. The steps build on one another and depend upon decisions made earlier, and what you learn or decide in one step may prompt you to return to a previous step for modifications and improvements. Just as programs are ongoing, evaluation is dynamic.
The dynamic nature of evaluation and the interconnectedness of an embedded evaluation with the program itself may trouble researchers who prefer to wait until a predefined time to report findings. And admittedly, having a program stay its course without midcourse refinements and improvements would make cross-site comparisons and replication easier. However, embedded
evaluation is built upon the principle of continuous program improvement. With embedded evaluation, as information is gathered and lessons are learned, the program is improved. The focus of embedded evaluation is to enable educators to build and implement high-quality programs that are continuously improving, as well as to determine when programs are not working and need to be discontinued. The overall purpose of designing a rigorous, embedded evaluation is to aid educators in providing an effective education for students.
Evaluation is a dynamic process. While embedded evaluation leads the evaluator through a stepped process, these steps are not meant to be items on a checklist. Information learned in one step may lead to refinement in a previous step. The steps of embedded evaluation are components of the evaluation process that impact and influence each other.
Where Do I Start? Just as the first step in solving a problem is to understand the problem, the first step in conducting an evaluation is to understand what you want to evaluate. For the purposes of this guide, what you want to evaluate is referred to as the “program.” It is important to note that the term program is used broadly in this guide to represent small interventions, classroom-based projects, schoolwide programs, and districtwide or statewide initiatives.
The first step in evaluation is to understand what it is you want to evaluate.
You can use the evaluation process presented in this guide to define and evaluate a small project, as well as to understand and evaluate the inner workings of large programs and initiatives. Regardless of the size or type of program, understanding the program is not only the first step in evaluation but also the most important one. Defining why your program should work, and making the theory that underlies your program explicit, lays the foundation upon which you can accomplish program improvement and measure program effectiveness.
How Is the Guide Organized?
Steps to Embed Evaluation Into the Program This guide presents a framework to aid you in embedding evaluation into your program planning, design, and decision-making. You will be led step-by-step from documenting how and why your program works to using your evaluation results (see Figure 1: Embedded Evaluation Model). The framework is based on the following five steps:
STEP 1: DEFINE – What is the program?
STEP 2: PLAN – How do I plan the evaluation?
STEP 3: IMPLEMENT – How do I evaluate the program?
STEP 4: INTERPRET – How do I interpret the results?
STEP 5: INFORM (a) and REFINE (b) – How do I use the results?
Throughout the guide, the boxed notes highlight important evaluation ideas. As mentioned earlier,
I notes provide excerpts from Appendix A: Embedded Evaluation Illustration – READ* to illustrate the process of designing an evaluation from understanding the program to using results, and
R notes indicate that additional resources on a topic are included in Appendix C: Evaluation Resources.
Appendices Appendices A and B provide examples of theory-driven, embedded evaluations of two programs that involve infusing technology into the curriculum in order to meet teaching and learning goals. These examples are provided solely for the purpose of illustrating how the principles in this guide can be applied in actual situations. The programs, characters, schools, and school districts mentioned in the examples are fictitious. The examples include methods and tools to aid you as you build evaluation into your programs and projects and become an informed, active partner with the evaluator.
The illustration in Appendix A: Embedded Evaluation Illustration – READ* is of a districtwide reading program that uses technology to improve literacy outcomes and to assess reading progress. The illustration in Appendix B: Embedded Evaluation Illustration – NowPLAN* focuses on a building-level evaluation of a statewide strategic technology plan. This example builds evaluation into the everyday practice of educators in order to improve instruction and monitor strategic planning components.