
How Should I Organize the Data?

During data collection, procedures should be put in place to protect privacy and to provide data security. For instance, if data can be tied to individual respondents, assign each respondent an identification number and store data according to that number. Often when data are collected at multiple times during the evaluation (e.g., pre and post) or when data sources need to be individually connected (e.g., student demographic data and assessment data), a secondary data set can be created to match identification numbers with respondents. If this is the case, this secondary data set should be encrypted and kept highly confidential (i.e., stored in a locked office and not on a shared server), so that individual information cannot be accessed intentionally or inadvertently by others. It is also good practice to control and document who has access to raw evaluation data.
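The identification-number approach described above can be sketched in Python. This is a minimal illustration, not a prescribed implementation; the respondent names, ID format, and record fields are all invented for the example.

```python
# Hypothetical sketch: assign each respondent a sequential ID and keep the
# name-to-ID lookup (the secondary "crosswalk" data set) separate from the
# analysis data, so analysis records never carry names.
respondents = ["Alice Field", "Ben Grove", "Cara Hill"]  # made-up names

# Secondary data set: the only place names and IDs appear together.
# This is the file that would be encrypted and stored separately.
crosswalk = {name: f"R{i:04d}" for i, name in enumerate(respondents, start=1)}

# Primary analysis records are keyed by ID only, never by name.
analysis_rows = [
    {"respondent_id": crosswalk["Alice Field"], "pre_score": 41, "post_score": 55},
    {"respondent_id": crosswalk["Ben Grove"], "pre_score": 38, "post_score": 49},
]
```

Because the analysis file holds only IDs, pre/post records can still be matched respondent by respondent, while the crosswalk alone carries the identifying link.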


Confidentiality and individual privacy are of primary importance during all aspects of the evaluation.

An evaluator should ensure that private information is not divulged in conversations about the program; during data collection, organization, and storage; or through evaluation reporting.


You should also document your data sets. Good documentation increases the credibility of your evaluation should questions arise regarding your findings. It is sound practice to keep a record of what data were collected, when they were collected, and how respondents and other participants were chosen. This documentation should also include any definitions necessary to interpret the data, as well as the interview protocols or survey instruments that were used. Documentation of what data were collected and how they were stored will be useful if you want to reanalyze your data in the future, if someone asks questions about your data, or if someone would like to replicate your evaluation. See the Data Collection, Preparation, and Analysis section in Appendix C for resources on data preparation and on creating codebooks to organize and document your data.

How Should I Analyze the Data?

The purpose of analyzing your data is to convert all of the raw data that you have collected into something that is meaningful. Upon organizing your data, you may find that you are overwhelmed with the data you have available and wonder how you will make sense of it. Start with your logic model and evaluation questions. List the indicators and associated targets you have outlined for each evaluation question. Use what you have set up during your evaluation design to organize your analysis. Take each evaluation question one at a time, examine the data that pertain to the indicator(s) you have identified for the evaluation question, and compare the data collected to your targets.

Analyzing your data does not have to be daunting. Often when people think of data analysis, they assume complicated statistics must be involved. In reality, there are two things to keep in mind:

Not all data analysis involves statistics.

Even if statistics are involved, they should be at the level that the intended audience will understand.

Analysis methods differ by the type of data collected. If the information to be analyzed includes quantitative data, some type of statistical analysis will be necessary. The most common way statistics are used in evaluation is for descriptive purposes. For example, if you want to describe the number of hours students spent using a computer at home or at school, you would calculate either the average number of hours or the percentage of students who used computers for a specified period of time. Or, you may want to compare the results of one group of students (e.g., at-risk students) to those of another group to see whether technology affects different groups differently. In this case, you may want to use the same statistics (e.g., means and percentages), but report separate results by group.
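Descriptive statistics of this kind need nothing beyond a spreadsheet or a few lines of code. The sketch below uses made-up weekly computer-use data to show the group-by-group means and percentages described above; the group names and hour values are invented for illustration.

```python
from statistics import mean

# Illustrative sketch with made-up data: weekly hours of computer use per
# student, grouped so at-risk students can be compared with other students.
hours = {
    "at_risk": [2, 4, 3, 5, 1],
    "other": [6, 5, 7, 4, 8],
}

summary = {}
for group, values in hours.items():
    summary[group] = {
        "mean_hours": mean(values),
        # Percentage of students in the group using a computer > 3 hours/week.
        "pct_over_3": 100 * sum(h > 3 for h in values) / len(values),
    }

for group, stats in summary.items():
    print(f"{group}: mean = {stats['mean_hours']:.1f} h, "
          f"{stats['pct_over_3']:.0f}% use > 3 h/week")
```

Reporting the same statistics separately by group, as here, is often all that is needed to see whether two groups look different before any formal testing.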

You may also want to use a simple test of significance (e.g., a t-test) to see if the differences in means are statistically significant (i.e., unlikely to have occurred by chance). Whether you use simple descriptive statistics or tests of significance, and how you want to group your information, depend on the type of information you have collected and your evaluation questions. For more complex data sets or in-depth analyses, more sophisticated statistical techniques, such as regression analysis, analysis of variance, multilevel modeling, factor analysis, and structural equation modeling, can be used.
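To make the t-test concrete, the sketch below works one through by hand on fabricated scores for two groups, using the standard pooled (equal-variance) two-sample formula. In practice a statistics package would report the p-value directly; here the t statistic is simply compared against the tabled two-tailed critical value for 12 degrees of freedom at the 0.05 level.

```python
from statistics import mean, variance

# Fabricated example scores for two groups of students.
group_a = [72, 85, 78, 90, 66, 81, 74]
group_b = [65, 70, 62, 75, 68, 71, 60]

n_a, n_b = len(group_a), len(group_b)

# Pooled variance for the equal-variance two-sample t-test.
sp2 = ((n_a - 1) * variance(group_a) + (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
t_stat = (mean(group_a) - mean(group_b)) / (sp2 * (1 / n_a + 1 / n_b)) ** 0.5
df = n_a + n_b - 2  # 12 degrees of freedom

# Two-tailed critical value for df = 12 at alpha = 0.05, from a t table.
t_crit = 2.179
significant = abs(t_stat) > t_crit
print(f"t({df}) = {t_stat:.2f}, significant at 0.05: {significant}")
```

With these made-up scores the group means differ by more than ten points, and the resulting t statistic exceeds the critical value, so the difference would be judged statistically significant at the 0.05 level.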





If the information to be analyzed involves qualitative data, such as data collected from open-ended survey questions, interviews, case studies, or observations, data analysis will likely involve one of two methods. The first is to develop a rubric to score your interview or observational data. Remember, if at all possible, the rubric should be developed in advance of data collection. Once data are scored using the rubric, you can use quantitative analyses to analyze the resulting numerical or categorical data.

A second method to analyze qualitative data is to create a protocol to aid you in data analysis. Such protocols typically call for an iterative process of identifying and understanding themes, organizing data by emerging themes, coding data by theme, and making assertions or conclusions based on these themes. Often, example responses or descriptions taken from the data are used to support the assertions. As with quantitative data, it is important when reporting qualitative data not to inadvertently reveal an individual’s identity. All assertions and findings should be “scrubbed” to be sure that someone reviewing the report cannot deductively identify evaluation participants. See Appendix C for more information on Data Collection, Preparation, and Analysis.
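Once responses have been coded by theme, a simple tally shows which themes dominate and which are marginal. The sketch below illustrates this step; the respondent IDs and theme labels are invented for the example, and in real use the codes would come from the iterative protocol described above.

```python
from collections import Counter

# Hypothetical sketch: interview responses coded with themes that emerged
# during review (theme names are invented for illustration).
coded_responses = [
    ("R0001", ["time_constraints", "tech_support"]),
    ("R0002", ["tech_support"]),
    ("R0003", ["time_constraints", "student_engagement"]),
    ("R0004", ["student_engagement", "tech_support"]),
]

# Tally how many interviews mention each theme.
theme_counts = Counter(t for _, themes in coded_responses for t in themes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: appears in {count} of {len(coded_responses)} interviews")
```

Counts like these support assertions in the report ("tech support was the most frequently raised concern"), which can then be backed with example quotations, scrubbed of identifying detail.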

When developing a rubric to code qualitative data:

Decide on the number of variations or categories. (It works well to use four to five categories.)

Describe the best response in detail.

For each subsequent variation, describe what the response would look like. For example, on a scale of 1 to 4, the best response would be a 4. A variation with many but not all components of the best response might be a 3. A variation with a few components of the best response might be a 2, while a variation with few or no components of the best response would be a 1.






The READ external evaluator collected a mix of quantitative and qualitative data to address the evaluation questions. Qualitative data collected through observations and interviews were coded using the READ implementation rubric and analyzed using descriptive statistics, including means and frequency distributions. Student reading assessment data were analyzed by testing for statistical significance, comparing mean test scores between groups of students and over time. An example is provided below. The full READ Evaluation Matrix starts in Appendix A, at Table 10: READ Evaluation Matrix—Strategies and Activities/Initial Implementation.

1. Logic Model Component: Improved integration of READ into classroom instruction (intermediate objective).

2. Evaluation Question: In what ways and to what extent did teachers integrate READ into their classroom instruction?

3. Indicator: Improved integration of READ lessons into classroom instruction.

4. Targets: By April, 50% of teachers will score a 3 or above (out of 4) on the READ implementation rubric. By June, 75% of teachers will score a 3 or above on the READ implementation rubric.

5. Data Source: READ implementation rubric (developed by the E-Team and administered by Dr. Elm).

6. Data Collection: Rubric completed through alternating, monthly classroom observations and teacher interviews.

7. Data Analysis: Rubric scores aggregated into frequency distributions and means; change over time to be analyzed.
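The data-analysis step in this matrix entry can be sketched in a few lines. The rubric scores below are fabricated for illustration (the example assumes ten observed teachers in an April cycle); the target percentages come from the matrix above.

```python
from collections import Counter
from statistics import mean

# Fabricated rubric scores (1-4) for ten teachers from an April observation
# cycle, checked against the April target of 50% scoring 3 or above.
april_scores = [3, 2, 4, 3, 1, 3, 2, 4, 3, 2]

# Frequency distribution of rubric scores, lowest to highest.
freq = dict(sorted(Counter(april_scores).items()))
pct_meeting = 100 * sum(s >= 3 for s in april_scores) / len(april_scores)

print("Frequency distribution:", freq)
print(f"Mean rubric score: {mean(april_scores):.1f}")
print(f"{pct_meeting:.0f}% scored 3 or above (April target: 50%)")
```

Repeating the same calculation on each month's scores gives the change-over-time comparison called for in the matrix.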

All data collected through the evaluation were managed and stored by Dr. Elm, the external evaluator. The computer used for storage and analysis was located in a locked office. Only the external evaluator had access to the raw data. Data were backed up weekly to an external drive, which was kept in a locked drawer. To protect teacher and student privacy, identification numbers were assigned to all participants. Teacher and student names were not recorded with the data.

READ online records of student and teacher use, rubric data, and survey data were accessible only to the external evaluator. Results were released only in aggregate form and contained no identifying information. All evaluation data were secured and kept confidential to protect individual privacy.





Managing the Unexpected and Unintended

Just as in life, sometimes the unexpected happens. Perhaps you find that you were unable to collect all the data you had outlined in your design. Or maybe the existing data that you were relying on are not accessible. Or the data were available, but the quality was not as good as you had expected (e.g., too much missing information or recording errors). Or possibly you were unable to get enough program participants to respond to your survey or agree to an interview. Don't panic. Go back to your evaluation questions. Reexamine your indicators and measures. Is there another measure that can be used for your indicator? Is there another indicator you can use to address your evaluation question? Think creatively about what data you might be able to access or collect. You may find that you are not able to answer a certain evaluation question or that your answer to that question will be delayed. Or you may find that you can answer your question partially, but not in the best way. In any case, document what happened, explain what alternatives you are pursuing, and simply do the best you can. Evaluation does not occur in a sterile laboratory but within the course of everyday practice. Your evaluation might be less than ideal at times, and you will undoubtedly face challenges, but in the long run some information is better than no information. See Appendix C for resources on Evaluation Pitfalls.


