STEP 4: INTERPRET – How Do I Interpret the Results?

How Do I Examine and Interpret My Results?

In the end, the important part of collecting and analyzing your information is not the statistics or analytical technique but rather the conclusions you draw. The process of coming to a conclusion can vary from goal to goal and objective to objective. One of the most difficult tasks is defining vague goals and objectives, such as “sufficient training” or “adequate progress.” However, you have gone to great lengths to understand your program and plan your evaluation, and you have already developed targets for your indicators. Because of this, your interpretation of results will likely be more straightforward and less cumbersome.

Examination of evaluation results should be ongoing. It is not wise to wait until the end of an evaluation to analyze your data and interpret your results. For instance, if your implementation results reveal that program activities were not put into place, continuing to measure short-term and intermediate objectives is likely a waste of your resources. Similarly, if the evaluation of intermediate objectives reveals that outcomes are not as envisioned, an important question is whether the program should be modified, scaled back, or discontinued. Do results indicate that the program is not working as expected?

Or do results reveal that the program’s theory is invalid and needs to be revisited? Was the program implemented as planned? Is it reasonable to think that making a change in the program could improve results? These are important questions to consider before moving on to the measurement of progress toward long-term goals.

Examination of data and interpretation of findings should be ongoing. Do not wait until the end of the evaluation!

In order to use evaluation for program improvement, communication of findings should be regular, continuous, and timely.

The READ evaluation subcommittee, the E-Team, examined the evaluation results and determined the following. Use of these findings will be discussed in Step 5.

Summative

1. First-year results indicate that state reading scores for READ students are higher than those for non-READ students. The gains are especially compelling for classrooms in which READ was used regularly and with fidelity, where increases in state reading scores were more than three times those of non-READ students.

2. Students in classrooms where READ was used regularly and with fidelity increased their reading scores on the state assessment by twice as much as students in READ classrooms where READ was used minimally.

3. Students of teachers who used READ assessment data as intended to differentiate instruction increased their reading scores on the state assessment by twice as much as students of teachers who did not use READ assessment data as intended.

4. Student scores on READ assessments had a strong, statistically significant positive correlation with student scores on the state reading assessment, indicating that the state reading assessment and the READ assessments are likely well aligned and that READ assessment data are likely a good indicator of performance on the state reading assessment. (A rough sketch of this kind of check follows these findings.)

Formative

5. State reading assessment data could not be analyzed in relation to home use of the READ program because only one classroom implemented the home component.

6. At the start of the year, teacher use of READ was promising and the program met its targets. However, as the program progressed and as more teachers were pressed to improve their use of READ, several targets were not met. Teachers did not use READ student assessment data as regularly as they used the classroom component of READ.

A full accounting of evaluation results by logic model component and evaluation question is provided in Appendix A, starting at Table 13: READ Evaluation Results—Strategies and Activities/Initial Implementation.
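
Findings 1 through 4 rest on two routine computations: comparing mean score gains across implementation groups and correlating scores on the two assessments. The Python sketch below shows how such checks might look on student-level data; it is not the E-Team’s actual analysis, and the file name, column names, and group labels are all hypothetical.

```python
# Minimal sketch of the kinds of checks behind findings 1-4. The file
# name, column names, and group labels are hypothetical, not taken from
# the READ evaluation itself.
import pandas as pd
from scipy import stats

df = pd.read_csv("read_student_scores.csv")  # hypothetical student-level data

# Gain on the state reading assessment (post-test minus pre-test).
df["gain"] = df["state_post"] - df["state_pre"]

# Findings 1-3: mean gains by implementation group
# (e.g., high fidelity, minimal use, non-READ).
print(df.groupby("fidelity_group")["gain"].agg(["mean", "count"]))

# Finding 4: correlation between READ and state assessment scores.
r, p = stats.pearsonr(df["read_score"], df["state_post"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```

A strong positive correlation, as described in finding 4, would appear here as an r value near 1 with a very small p value.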

Interpretation should address the relationship between implementation and long-term goals. Presuming that your program was implemented and ongoing results were promising, to what extent did the program accomplish its long-term goals?

During interpretation, consider how the program worked for different groups of participants and under different conditions. You may also want to examine how long-term outcomes vary with implementation, as well as with results from short-term and intermediate indicators.

Results should be examined in relation to the proposed program’s theory. Do evaluation findings support the program’s theory? Were the assumptions underlying the program’s theory validated? If not, how did the program work differently from what you had proposed? How can the theory and the logic model representing this theory be changed to reflect how the program worked?

The logic model can be used as a tool to present evaluation findings, as well as to explain the relationships among components of the program. Updating the logic model to include results can be a useful reporting and dissemination tool.

Cautions During Interpretation

Two common errors during results interpretation are overinterpretation and misinterpretation. Unless the evaluation design was a randomized controlled experiment, results interpretation should not claim causal relationships. There may indeed be relationships between your program’s activities and its outcomes (and hopefully there will be!), but unless all rival explanations can be ruled out, causal associations cannot be claimed. Doing so would be an overinterpretation of your results.

When interpreting evaluation findings, be careful not to claim the data say more than they actually do!

Additionally, when interpreting results, you should consider possible alternative theories for your results. Considering and recognizing other explanations or contributors to your evaluation results does not diminish the significance of your findings but rather shows an understanding of the environment within which your program was implemented.

Over time, it is a combination of factors, some unrelated to the program itself, that interact to create results. Documenting your program’s environment can guard against misinterpretation of results and instead provide a thoughtful description of the circumstances under which the results were obtained. See Appendix C for more information on Interpreting, Reporting, Communicating, and Using Evaluation Results.

Although the READ evaluation used a true experimental design, E-Team members knew it would still be worthwhile to consider the possibility that other factors might have influenced the positive findings. The E-Team therefore brainstormed possible competing explanations for the positive results of the READ program.

The E-Team decided that another plausible explanation for the positive results was that the teachers who used READ regularly in the classroom and who used READ assessments as intended may simply have been more skilled, and their students might have shown a similar increase in reading scores even without the READ program. The E-Team decided to follow up on fidelity of implementation and its relationship to teacher skills. In addition, while classrooms were randomly assigned to READ to minimize initial differences between READ and non-READ classrooms, it is possible that by chance more skilled teachers were assigned to the READ program group. The E-Team also intends to investigate this issue further in Year 2 of the evaluation.
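
Checking whether randomization happened, by chance, to produce unbalanced groups is a standard follow-up analysis. As a rough illustration, not the E-Team’s actual method, the sketch below compares the two groups on a single teacher characteristic with a two-sample t-test; the numbers are made up, and treating years of teaching experience as a proxy for teacher skill is an assumption of the sketch.

```python
# Illustrative balance check with made-up data. Treating years of teaching
# experience as a proxy for teacher skill is an assumption of this sketch.
from scipy import stats

read_teachers = [4, 7, 12, 3, 9, 15, 6, 8]       # hypothetical years of experience
non_read_teachers = [5, 6, 11, 2, 10, 14, 7, 9]  # hypothetical years of experience

# Welch's t-test does not assume equal variances. A small p-value would
# flag a chance imbalance between the randomly assigned groups.
t, p = stats.ttest_ind(read_teachers, non_read_teachers, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```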

How Should I Communicate My Results?

As mentioned earlier, evaluation findings should be communicated to program staff on an ongoing and regular basis. These formative findings are critical to program improvement. Setting a schedule for regular meetings between program staff and the evaluation team, as well as building these communications into your time line, will ensure that evaluation findings can truly help the program during its operation.

Evaluators can provide quick feedback at any stage of the program to help improve its implementation. For instance, if an evaluator notices from observing professional development sessions that teachers are leaving the training early to attend another faculty meeting, the evaluator should give quick feedback to program staff that the timing of sessions may not be convenient (and, for this reason, teachers are not receiving the full benefit of the training).

Setting up regular times throughout the program’s operation to share evaluation findings with program staff and other stakeholders is a key responsibility of the evaluator and critical to a program’s success.

Suppose the evaluator finds during the early stages of the program (through interviews or classroom observations) that teachers are struggling with the technology needed to use the program in the classroom. The evaluator can give quick feedback at a monthly meeting or through an email that technology support and technical assistance are needed in the classroom. Remember, however, that an evaluator should not report on individual teachers or classrooms unless consent to do so has been obtained. Doing so could violate the ethical obligation to participants in the evaluation and undermine future data collection efforts.

Even quick feedback should maintain confidentiality.

In addition to relaying your findings on an ongoing basis for formative purposes, you will also want to communicate your summative evaluation findings regarding the extent of your program’s success to stakeholders, including administrators, school board members, parents, and funders. The first step in communicating your results is to determine your audience. If you have multiple audiences (e.g., administrators and parents), you may want to consider multiple methods of reporting your findings, including reports, presentations, discussions, and short briefs. Make a list of (a) all the people and organizations you intend to communicate your results to and (b) any others you would like to know about your evaluation findings. For each audience, ask yourself these questions:

What background do they have regarding the program?

What will they want to know?

How much time and interest will they have?

What do you want the audience to know?

Thinking through these questions will help you tailor your communication. In general, if you are given guidelines on what to report by a funder or by the state or district, try to follow them as closely as you can. If you are not given guidelines, then put yourself in the position of your audience and consider what information you would like to know. Here are some tips to keep in mind:

If the audience already has background information on the program, try to focus on providing only specific findings from your evaluation. If your audience is not familiar with your program, you can use your program theory and logic model to introduce the program and provide a description of how the program is intended to work.

Address the goals and objectives that you believe the audience would most want to know about.
