
Methods and Tools: Basic Information, Advantages, and Disadvantages

Portfolios
Basic Information: Primarily a qualitative method.
Advantages:
- Can be captured and stored electronically
- Can provide a representative cross-section of work
- If portfolio work is used pre-program and post-program, data can be used to examine growth.
Disadvantages:
- Scoring of qualitative work is often subjective.
- Objectivity of results relies on strength of scoring rubric and training of scorers, so reliability and validity should be considered.

Case Studies
Basic Information: Primarily a qualitative method, but can include both qualitative and quantitative data. Can include a mixture of many methods, including interviews, observations, existing data, etc.
Advantages:
- Provides a multi-method approach to evaluation
- Often allows a more in-depth examination of implementation and change than other methods
Disadvantages:
- Analyses of data can be subjective.
- Expensive to conduct and analyze; as a result, sample sizes are often small.

Rubrics
Basic Information: Quantitative method. Guidelines to objectively examine and score subjective data such as observations, portfolios, open-ended survey responses, student work, etc. See Rubrics sidebar on page 47 for more information.
Advantages:
- Powerful method to examine variations of program implementation
- Well-defined rubrics can be used not only for evaluation purposes but also to facilitate program implementation.
Disadvantages:
- Objectivity of results relies on strength of scoring rubric and training of scorers.
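Because both portfolios and rubrics depend on the training of scorers, evaluators often spot-check inter-rater agreement before trusting rubric scores. A minimal sketch of that check, with two hypothetical raters and invented scores on a 4-point rubric:

```python
# Hypothetical example: two raters score the same eight work samples
# on a 4-point rubric. All names and scores are invented for illustration.
rater_a = [4, 3, 2, 4, 1, 3, 2, 4]
rater_b = [4, 3, 3, 4, 1, 2, 2, 4]

# Exact agreement: both raters gave the identical score.
exact = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Adjacent agreement: scores differ by at most one point.
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / len(rater_a)

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")
```

Low exact agreement with high adjacent agreement usually signals that the rubric's score descriptions need sharpening rather than that the raters are careless.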


Constraints

All programs have constraints during their implementation. Constraints might be contextual, in that you may not have the support needed to fully evaluate your program, or they might involve resources, including financial or time constraints. Feasibility is important to consider while designing your evaluation.

A good evaluation must be doable. The design for a rigorous, comprehensive evaluation may look great on paper, but do you have the time and financial resources necessary to implement it? Do you have adequate organizational and logistical support to conduct the evaluation the way you have planned?

Every evaluation has constraints, and if you do not consider them at the outset, your thoughtfully planned evaluation may be sidelined to no evaluation at all. Remember, a small evaluation is better than no evaluation, because basing program decisions on some information is better than basing decisions on no information. Considering the feasibility of carrying out your evaluation is critical when planning it. Be sure to plan within your organizational constraints.

You can also use your logic model to represent your evaluation time line and evaluation budget. The time frame of when and how often you should measure your short-term, intermediate, and long-term objectives can be noted directly on the logic model, either next to the headings of each or within each objective. Likewise, the cost associated with data collection and analysis can be recorded by objective.

By examining time line and budget by objective, evaluation activities that are particularly labor intensive or expensive can be clearly noted and planned for throughout the program’s implementation and evaluation. The Budgeting Time and Money section in Appendix C includes several resources that may help you with considerations when budgeting time and money for an evaluation.


The READ E-Team decided on data collection methods, including the data sources, for each evaluation question and associated indicators. Two examples are provided below.

1. In what ways and to what extent did teachers integrate READ into their classroom instruction?

- A READ rubric will be used to measure teacher implementation of READ in the classroom.
- The rubric will be completed through classroom observations and teacher interviews.
- The READ implementation rubric will use a 4-point scale, with 4 representing the best implementation.
- Data will be collected monthly, alternating between classroom observations one month and interviews the following month.

2. To what extent did READ improve student learning in reading?

- The state reading assessment will be used to measure student learning in reading. It is administered in April of each academic year, beginning in second grade.
- READ assessment data will be used as a formative measure to examine student reading performance.
- State reading scores and READ assessment data will be disaggregated and examined by quality of teacher use (using the READ implementation rubric), frequency of home use, initial reading performance, grade level, gender, ethnicity, special education status, and English language proficiency.
- Previous-year state reading assessment scores will be used as a baseline against which to measure student reading improvement.
- Reading scores on the state assessment will be analyzed in relation to scores on the READ assessments to determine the degree to which the READ assessments correlate with the state reading assessment.
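The disaggregation and correlation analyses described for question 2 can be sketched in plain Python. The student records, field names, and scores below are invented for illustration; a real analysis would pull from the district's assessment files:

```python
import math
from collections import defaultdict
from statistics import mean

# Hypothetical student records (all values invented for illustration).
students = [
    {"grade": 2, "read": 62, "state": 58},
    {"grade": 2, "read": 75, "state": 72},
    {"grade": 3, "read": 70, "state": 66},
    {"grade": 3, "read": 85, "state": 84},
    {"grade": 3, "read": 90, "state": 88},
    {"grade": 2, "read": 80, "state": 78},
]

# Disaggregate: mean state reading score by grade level.
by_grade = defaultdict(list)
for s in students:
    by_grade[s["grade"]].append(s["state"])
for grade, scores in sorted(by_grade.items()):
    print(f"Grade {grade}: mean state score = {mean(scores):.1f}")

# Correlate READ assessment scores with state assessment scores
# (Pearson r, computed directly from its definition).
xs = [s["read"] for s in students]
ys = [s["state"] for s in students]
mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
print(f"Pearson r = {r:.2f}")
```

The same grouping step extends to the other breakdowns the E-Team listed (gender, ethnicity, special education status, and so on) by changing the grouping key.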

For a full list of evaluation questions, data sources, and data collection methods, see the READ Evaluation Matrix tables 10, 11, and 12 in Appendix A, Step 3: Implement the Evaluation. The READ Evaluation Matrix includes the READ logic model components, evaluation questions, indicators, targets, data sources, and data collection methods by the READ logic model strategies and activities, early/intermediate objectives, and long-term goals. The data analysis column in the READ Evaluation Matrix will be completed in Step 3.


STEP 3: IMPLEMENT – How Do I Evaluate the Program?

Ethical Issues

Because evaluation deals with human beings, ethical issues must be considered. Evaluation is a type of research: evaluators research and study a program to determine how and to what extent it works. You likely have people (perhaps teachers or students) participating in the program, people leading the program, people overseeing the program, and people relying on the program to make a difference. It is the responsibility of the evaluator to protect people during evaluation activities. An evaluator must be honest, never keeping the truth from or lying to participants. You should be clear about the purpose of the program and its evaluation. Respect for participants always comes before evaluation needs.

Prior to collecting any data, check with your administration to see what policies and procedures are in place for conducting evaluations. Is there an Institutional Review Board (IRB) at the state, district, or school level that must be consulted prior to conducting an evaluation? Does your state, district, or school have formal Human Subjects Review procedures that must be followed? Does the evaluator need to obtain approvals or collect permission forms? Policies and procedures regarding informed consent and ethics to safeguard study participants must be followed, and permissions must be received, before any data are collected. For resources on federal requirements regarding Institutional Review Boards or the protection of human subjects in research, see the Ethical Issues section in Appendix C.
