
Choose one of the assessment articles listed below, read it, and complete a summary reflection. In your own words, explain what you have learned from the article and how it will guide your instruction in the future. Has the article left you with any questions? Please cite the article source in APA format. (The reflection must be one page in length, using Times New Roman, 12-point font, and double spacing.)

For APA citation style guide please click the link below:

https://lib.usm.edu/help/tutorials/

Articles on Student Assessment

  1. Bundock, K., O’Keeffe, B. V., Stokes, K., & Kladis, K. (2018). Strategies for minimizing variability in progress monitoring of oral reading fluency. Teaching Exceptional Children, 50(5), 273–281. (Strategies for Minimizing Variability.pdf)
  2. Lindstrom, E. R., Gesel, S. A., & Lemons, C. J. (2019). Data-based individualization in reading: Tips for successful implementation. Intervention in School and Clinic, 55(2), 113–119. (Data-based Individualization in reading.pdf)
  3. Powell, S. R., & Stecker, P. M. (2014). Using data-based individualization to intensify mathematics intervention for students with disabilities. Teaching Exceptional Children, 46(4), 31–37. (Using Data-based Individualization to Intensify Mathematics.pdf)


Strategies for Minimizing Variability in Progress Monitoring of Oral Reading Fluency

Kaitlin Bundock, Breda V. O’Keeffe, Kristen Stokes, and Kristin Kladis

TEACHING Exceptional Children, Vol. 50, No. 5, pp. 273–281. Copyright 2018 The Author(s). DOI: 10.1177/0040059918764097


Mr. Long is a special education teacher in an urban school district. Three times per year, he uses Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Next to assess his students’ oral reading fluency (ORF) skills at their chronological grade level. Mr. Long conducts weekly progress monitoring of all students who score below the expected benchmark score for words read correctly per minute (WCPM). Students are assessed at either their grade level, if they are reading at or above 50 WCPM according to the DIBELS Next progress-monitoring guidelines (Dynamic Measurement Group, 2012), or at their instructional level based on results from a survey level assessment. To conduct the assessments, Mr. Long takes students out of the classroom during various times of the day. Depending on the time of day, Mr. Long uses different setting locations, including the hallway, a conference room, and an unused classroom. Students are taken individually or in small groups, depending on how far away he must take them for the assessment.

After a few weeks, Mr. Long notices one of the students, Laine, has inconsistent scores in her data set (see Figure 1). Laine, a third-grade student with a specific learning disability, had scores of 61, 43, 75, and 57 WCPM over 4 weeks. Mr. Long compares Laine’s scores with those of other students in the group and notices the other students’ scores are more consistent. For example, Mason’s scores are 66, 71, 71, and 72 WCPM during the same time period (see Figure 2). Mr. Long consults with the school’s reading specialist and finds out that “high variability” includes a range of 10 or more words read correctly above or below the trend line. Because Mr. Long has graphed Laine’s data with a trend line characterizing the data, he can quickly determine that her data are highly variable. Mr. Long realizes that highly variable data can obscure what Laine’s true progress might be. He sees the need to collect more data to determine if the variability can be reduced before a good decision about changing her intervention can be made.
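To make the reading specialist's rule concrete, the check Mr. Long performs can be written out directly. The sketch below (Python, used here only for illustration; the article itself contains no code) fits an ordinary least-squares trend line to each student's weekly WCPM scores from the vignette and counts the points that fall more than 10 WCPM above or below that line. The helper names and the simple least-squares fit are illustrative choices, not part of any published CBM ORF tool.

```python
# Minimal sketch (illustrative only): fit a least-squares trend line to weekly
# WCPM scores and count points falling more than 10 words above or below it,
# the "high variability" indicator the reading specialist describes.

def trend_line(scores):
    """Return (slope, intercept) of an ordinary least-squares line over weeks 1..n."""
    n = len(scores)
    weeks = list(range(1, n + 1))
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
             / sum((x - mean_x) ** 2 for x in weeks))
    return slope, mean_y - slope * mean_x

def points_outside_band(scores, band=10):
    """Count data points more than `band` WCPM above or below the trend line."""
    slope, intercept = trend_line(scores)
    return sum(1 for week, wcpm in enumerate(scores, start=1)
               if abs(wcpm - (slope * week + intercept)) > band)

for name, scores in {"Laine": [61, 43, 75, 57], "Mason": [66, 71, 71, 72]}.items():
    slope, _ = trend_line(scores)
    outside = points_outside_band(scores)
    print(f"{name}: trend {slope:+.1f} WCPM/week, "
          f"{outside} of {len(scores)} points more than 10 WCPM from the trend line")
```

Run on the vignette data, the check flags two of Laine's four points as more than 10 WCPM from her trend line, while all of Mason's points stay well inside the band, matching Mr. Long's visual impression of the two graphs.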

CBM is useful and effective for monitoring student progress in important skills, such as reading, mathematics, and writing. Research has shown that (a) CBM can be easily implemented and interpreted by teachers (e.g., Fuchs, Deno, & Mirkin, 1984), (b) student outcomes have improved when teachers use CBM to inform instructional decision making (e.g., Fuchs, Fuchs, Hamlett, & Stecker, 1991), (c) reliable and valid measures have been developed that predict important student outcomes (e.g., Fuchs, Fuchs, & Maxwell, 1988; Kim, Petscher, Schatschneider, & Foorman, 2010; Wayman, Wallace, Wiley, Tichá, & Espin, 2007), and (d) CBM can be an integral component of multi-tiered systems for identifying and monitoring students’ academic needs (e.g., Kovaleski, VanDerHeyden, & Shapiro, 2013; M. R. Shinn, 2007). CBM for reading (CBM-R) is an efficient and effective research-based progress-monitoring tool to monitor student growth in reading and to evaluate the effectiveness of targeted instruction (Good et al., 2011; Hosp, Hosp, & Howell, 2016). CBM-R is easy to administer and requires minimal resources, such as time and materials. Furthermore, the feedback teachers receive from administering CBM-R can inform instructional decision making and provide critical data about individual student progress toward reading goals. Given the utility of CBM-R, it is widely used as a key data source for instructional and eligibility decision making (Ardoin, Christ, Morena, Cormier, & Klingbeil, 2013).

The most commonly used CBM-R is ORF (CBM ORF). CBM ORF is a research-based, standardized assessment of connected text that is administered to individual students. CBM ORF is a good indicator of a student’s current skill level and predictor of future reading performance (Deno, Fuchs, & Marston, 2001; Fuchs, Fuchs, Hosp, & Jenkins, 2001; Kim et al., 2010). CBM ORF requires the student to use a variety of different literacy skills, such as decoding, vocabulary, and comprehension (Hosp et al., 2016). CBM ORF originated in the 1970s, when practitioners randomly selected passages from the curriculum materials used in the classroom (e.g., Deno, 1985; Deno, Marston, Shinn, & Tindal, 1983). This practice increased the utility and validity of the measure for making instructional decisions; however, researchers found that student performance on passages within a grade level varied substantially, decreasing the reliability of these measures (see Hintze & Christ, 2004). Later iterations of CBM ORF included development of passages equated based on readability formulae (e.g., Aimsweb;

Figure 1. Laine, third-grade student curriculum-based measurement oral reading fluency, high variability to moderate variability


M. M. Shinn & Shinn, 2002; DIBELS, 6th ed.; Good & Kaminski, 2002). Unfortunately, student performance on these passages continued to be excessively variable within grade levels (e.g., Poncy, Skinner, & Axtell, 2005). Excessive variability makes the data difficult to interpret, and therefore, recommendations for instructional modifications become unclear.

Currently, CBM ORF passages have been written using readability formulae for initial equating, then field-tested with students to choose the most equivalent passages to include in published sets (e.g., DIBELS Next; Good et al., 2011; easyCBM; Alonzo, Tindal, Ulmer, & Glasgow, 2006; FastBridge; Christ & Colleagues, 2015). Although some researchers have found persistent variability among these more modern passages (Cummings, Park, & Schaper, 2013), those studies were conducted with higher-performing students than those who are typically included in progress monitoring (e.g., students scoring at or above benchmark at screening). Other researchers found that when passages are implemented as intended, such as to progress monitor students below or well below benchmark, acceptably low levels of variability are seen (O’Keeffe, Bundock, Kladis, Yan, & Nelson, 2017; Tindal, Nese, Stevens, & Alonso, 2016).

Given the challenges presented by excessive variability, educators should be aware of possible sources of variability and have strategies to prevent and address variability in CBM ORF progress monitoring. These strategies should be followed in addition to the recommendations from the specific publisher of the CBM ORF in use and from general recommendations for implementing and interpreting CBM (e.g., Hosp et al., 2016).

Indicators of Excessive Variability in CBM ORF Progress Monitoring

Educators need to determine how much variability is too much when evaluating student progress-monitoring data. Typically, educators evaluate progress-monitoring data using time series graphs, with words read correctly on each measurement occasion graphed over time. When educators use visual analysis to determine if a student is making adequate progress or not, multiple graphical components can affect this decision. For example, the amount of variability and the degree of slope in the data can make evaluation decisions more or less accurate, with higher variability and lower slope making decisions substantially less accurate (Nelson, Van Norman, & Christ, 2017; Ottenbacher, 1990; Van Norman & Christ, 2016). If inaccurate decisions are made based on variable data, students who need a change in intervention may not receive it, whereas students who do not need a change may experience an unneeded change in intervention. For CBM ORF, researchers have suggested that very low variability exists when most (i.e., 2/3) of the data points fall within five correctly read words per minute (five above and five below) of a trend line, and acceptable variability exists when most of the data points fall within 10 correctly read words per minute (10 above and 10 below) of a trend line (Christ, Zopluoglu, Monaghen, & Van Norman, 2013). These values are based on ranges across grade levels (e.g., Christ & Silberglitt, 2007); therefore, students who read more slowly would have lower limits of variability that are acceptable. If available through an electronic database (e.g., AimswebPlus; Pearson, 2017), researchers recommend making these determinations based on confidence intervals, which are generated statistically with the student data (Christ & Silberglitt, 2007). Values that fall outside these ranges can be considered extreme values, which can affect the interpretation of the data.

Figure 2. Mason, third-grade student curriculum-based measurement oral reading fluency, very low variability

Although variability will always be a part of assessments using multiple forms—as are used for CBM ORF—educators can improve their decision making by preventing variability as much as possible, identifying excessive variability, understanding how it affects data interpretation, and taking steps to minimize the impact of the variability on decision making.

Sources of Variability

Three primary sources account for the majority of the variability in CBM ORF progress monitoring. These sources include passage-level, student-level, and setting factors.

Passage Factors Contributing to Variability

Variability that is attributable to passage-level factors has decreased over time. Publishers of CBM ORF have taken steps to reduce passage variability, starting with the inclusion of carefully written passages to counter variability that resulted from teacher-selected passages (Good & Kaminski, 2002; Pearson, 2017; M. M. Shinn & Shinn, 2002) and, more recently, including the use of readability formulae, field-testing, and statistical equating to decrease passage variability (Ardoin & Christ, 2009; Christ & Ardoin, 2009; Poncy et al., 2005; Powell-Smith, Good, & Atkins, 2010). In spite of these actions, some passage-level variability remains (Briggs, 2011; O’Keeffe et al., 2017). In particular, research has indicated that there are differences in difficulty level between narrative and expository passages at the same reading level, but there is not a consensus regarding which type of passage tends to be more difficult (Briggs, 2011; O’Keeffe et al., 2017).

Publishers continue to include both narrative and expository passages to increase the validity of their passage sets for monitoring progress toward important goals, which would presumably include the ability to read and comprehend narrative and expository text. The variability between passage types may be explained by passage features that contribute to variability that currently are not captured by readability formulae, such as the repetition in phrasing found in the DIBELS Next progress-monitoring expository passage “Amazing Dolphins,” in which the first three sentences are all formatted as questions starting with “Can you . . .” or “Could you . . .” (O’Keeffe et al., 2017). These features may contribute to students’ reading this expository passage faster than other passages at this grade level, even though previous research indicated that expository passages tended to be more difficult (Briggs, 2011). Given the evidence of continued variability found among passage types, practitioners should take steps to minimize variability in other ways.

Student Factors Contributing to Variability

Student-level factors also contribute to variability of CBM ORF scores. Many studies evaluating variability of CBM ORF have included participants who score at or above benchmark proficiency (Ardoin & Christ, 2009; Betts, Pickart, & Heistad, 2009; Briggs, 2011; Christ & Ardoin, 2009; Francis et al., 2008; Hintze & Christ, 2004; Poncy et al., 2005). These studies, including higher-performing students, found higher rates of variability than studies conducted only with students who scored below benchmark proficiency (O’Keeffe et al., 2017; Powell-Smith et al., 2010; Tindal et al., 2016). Students who score below benchmark are the typical population of students who receive progress monitoring (unless there are other concerns), whereas all students across the range of high to lower skills should receive benchmark assessments (Good et al., 2011). Therefore, decisions about variability in progress-monitoring passages should be made based on research with the target population (i.e., students who scored below benchmark). In addition, although it may be common for practitioners to attribute variability in CBM ORF scores to student factors, such as interest in the passage, excitement due to an upcoming holiday, or student mood, these factors have not been found to influence variability beyond passage-level variability (Briggs, 2011). Practitioners need to have an accurate sense of the factors that contribute to variability at the student level so they can take steps to best control this variability.

Setting Factors Contributing to Variability

Studies have found that setting factors, including where assessments are administered, who administers assessments, and the procedures used to administer assessments, contribute to variability in CBM ORF scores. A study that compared two assessment administrators and three assessment settings found significant differences among students’ correct words per minute based on who administered the assessments as well as where the assessments were administered (Derr-Minneci & Shapiro, 1992). Additionally, variability in students’ scores has been attributed to the degree to which assessment administrators follow standardized procedures (Reed & Sturges, 2012). Among a group of trained assessment administrators, 8% of assessments were found to have uncorrectable abnormalities, including forgetting to set a timer, allowing students to continue after a timer went off, not adhering to scripted procedures, forgetting to administer a passage, and providing unscripted encouragement to students (Reed & Sturges, 2012). Additionally, 91% of assessments had correctable mistakes, including miscounting the number of errors, counting inserted words as errors, and miscalculating the words correct per minute (Reed & Sturges, 2012). Even among trained assessors, it is common for adherence to procedures to diminish over time if periodic refresher trainings are not provided (Reed & Sturges, 2012). Due to the sensitivity of CBM ORF to setting factors, such as administrator characteristics, environment, and procedures, practitioners should ensure fidelity of assessment administration (Christ & Silberglitt, 2007) and provide a consistent setting for assessments.

Recommendations for Minimizing Variability

Educators can reduce variability related to passage-, student-, and setting-related factors.

Passages

Because CBM ORF typically involves the use of connected text to present a coherent story or describe a specific topic, some variability is to be expected. To minimize the variability due to passage differences, educators should use the most recently updated, published passage sets (e.g., DIBELS Next; Good et al., 2011; easyCBM; Alonzo et al., 2006; FastBridge; Christ and Colleagues, 2015). Choose passage sets that have been written specifically for the purpose of assessment using grade-level guidelines (i.e., not chosen randomly from instructional materials) and have been field-tested with students and chosen based on minimal variability in actual student performance (i.e., not just leveled with readability scores). To date, there is no published research indicating which current, published probe sets have more or less error. In addition, educators should implement the passages in the order that they were published within each grade level. For example, if progress-monitoring passages are numbered 1 to 20, educators should administer the passages in that order. Authors have typically ordered the passages to distribute any remaining variability evenly across the passages (e.g., Powell-Smith et al., 2010).

To increase the external validity of the assessment, passage sets often include both narrative (i.e., story-based) and expository passages (i.e., content area texts, such as science, social studies, history). Research has shown that there are often differences in reading performance across these types of passages, but some research has shown that narrative passages are easier than expository (e.g., Briggs, 2011), whereas other research has found some expository passages to be easier than narrative passages (e.g., O’Keeffe et al., 2017). No clear recommendations exist about how to minimize variability for these types of passage differences, but educators should be aware that these differences may occur in progress monitoring so they can track trends in students’ scores related to type of passage if high variability is observed. Tables 1 and 2 are checklists of recommendations to reduce variability in CBM ORF progress monitoring before and during assessment administration.

Student

Although the selection of students to progress monitor and the level of materials to use should be based on the types of decisions to be made (i.e., evaluating effects of interventions vs. evaluating grade-level proficiency), we recommend that schools select students for progress monitoring who score below or well below established benchmarks at screening. For example, DIBELS’ authors suggest that students who score below the benchmark goal on one or more benchmark assessment measures should receive progress monitoring (Good et al., 2011). Typically, these students require more intensive and individualized instructional interventions in one or more skill areas.

Table 1. Before Assessment Checklist of Recommendations for Reducing Variability in CBM ORF Progress Monitoring

•• Use most recently updated, published progress-monitoring passage sets.

•• Choose passage sets written for assessment, field-tested with students to establish equivalence, with minimal variability in student performance.

•• Choose students for progress monitoring who score below or well below publisher benchmarks at screening.

•• Use a survey-level assessment or “testing back” to determine correct instructional grade level for progress monitoring, as recommended by publisher of progress-monitoring system.

•• Train all individuals who will administer CBM ORF assessments.  Include training on administration and scoring procedures.  Include opportunities to practice with feedback.

Note. CBM = curriculum-based measurement; ORF = oral reading fluency.


Frequent progress monitoring is recommended to ensure these students are making adequate progress toward reading goals. Student data from progress monitoring will allow educators to make decisions on whether to increase, decrease, or modify reading interventions.

After identifying students to monitor, the next step is to determine the instructional level at which to progress monitor each student. DIBELS’ and Aimsweb’s authors recommend using a survey-level assessment or “testing back” until the correct instructional grade level is determined (Good et al., 2011; Pearson, 2017). A survey-level assessment is a tool that can be used to determine a student’s instructional level and identify an appropriate grade level to progress monitor a student and an applicable fluency goal. For example, if a student is unable to meet the grade-level benchmark, educators can administer a survey-level assessment to identify an appropriate instructional level to progress monitor the student, which may be lower than the student’s chronological grade level. If educators progress monitor a student on a different grade level, they should still administer on-grade-level assessments at least three times per year during benchmark screening assessments. Assessing a student on-grade level three times per year will help determine if generalization from intervention to grade-level standards is occurring. In addition, for students with disabilities, benchmark assessments allow the educator to determine if the student is making progress in grade-level standards.
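The "testing back" logic of a survey-level assessment can be sketched as a loop that steps down one grade level at a time until the student's median score lands in an instructional range. The Python sketch below is a hypothetical illustration only: the INSTRUCTIONAL_RANGE values, the administer_probes stub, and the median-of-three convention are assumptions, and in practice the publisher's survey-level assessment guidelines should govern the actual cut points and procedure.

```python
# Hypothetical sketch of a survey-level assessment ("testing back").
# The instructional ranges and administer_probes() are illustrative stand-ins;
# follow the publisher's survey-level assessment guidelines in practice.
from statistics import median

# Hypothetical instructional WCPM ranges by grade (NOT publisher benchmarks).
INSTRUCTIONAL_RANGE = {1: (20, 60), 2: (40, 90), 3: (70, 120), 4: (90, 140)}

def administer_probes(student, grade, n=3):
    """Placeholder for giving n passages at a grade level and returning WCPM scores."""
    return student["scores_by_grade"][grade][:n]

def survey_level_assessment(student, chronological_grade):
    """Test back one grade at a time until the median WCPM falls in the instructional range."""
    for grade in range(chronological_grade, 0, -1):
        med = median(administer_probes(student, grade))
        low, high = INSTRUCTIONAL_RANGE[grade]
        if low <= med <= high:
            return grade, med          # monitor progress at this grade level
    return 1, med                      # floor at the lowest level assessed

student = {"scores_by_grade": {3: [43, 57, 61], 2: [55, 62, 58]}}
print(survey_level_assessment(student, chronological_grade=3))   # -> (2, 58)
```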

Setting

Strategies for reducing setting-related variability include ensuring consistent and accurate administration and scoring within appropriate settings.

Assessors. Ensuring fidelity of assessment procedures and scoring practices is an essential element of progress monitoring. Consistent implementation of progress-monitoring methods and scoring procedures across the school is necessary because the data are used in making both low-stakes and high-stakes decisions. All individuals who will administer CBM ORF assessments need to be trained in the procedures, regardless of the CBM ORF program used. In addition, review of training should occur at regular intervals (e.g., annually) to prevent drift from accepted procedures. Training should include administration and scoring procedures and include opportunities to practice administering and scoring assessment measures with feedback. In addition to providing training, we recommend self-checking assessment accuracy or having a schoolwide system for regularly monitoring and giving feedback on assessment accuracy. School personnel can use assessment accuracy checklists provided by the CBM ORF publisher (e.g., for DIBELS Next, Good et al., 2011, pp. 113–120) or created by school personnel. The principal or reading specialist in the school could administer fidelity checks at each grade level throughout the school year. The fidelity checks should be used as learning tools for educators and contribute to their professional development.

Location of assessment. Assessors should administer progress-monitoring measures consistently. Most research recommends that CBM ORF measures be administered multiple times per week (Shapiro, 2012).

Table 2. During Assessment Checklist of Recommendations for Reducing Variability in CBM ORF Progress Monitoring

•• Implement passages in their published order within each grade level.

•• Administer CBM ORF measures in a consistent, quiet location with few distractions or disruptions.  Avoid administering in high-traffic or noisy locations.

•• Be aware of differences in difficulty between narrative and expository passages.  Indicate which passages are narrative or expository on individual students’ graphs if scores are highly variable.

•• If monitoring off grade level, administer on-grade-level assessments at least three times per year during benchmark screening assessments.

•• Use a strong set of data to determine aimlines.  Administer three passages and use the median score to establish the baseline data point for the aimline.

•• Monitor assessment accuracy at each grade level throughout the year.  Use self-checks or establish a schoolwide system for regular monitoring and feedback on assessment accuracy.  Use assessment accuracy checklists provided by CBM ORF publisher (if available), or create accuracy checklists for the school.


Studies have demonstrated that inconsistencies in CBM ORF administration, including where passages are administered and the extent to which the standardized directions are followed, can influence variability in students’ scores (Derr-Minneci & Shapiro, 1992). Administer CBM ORF measures in a consistent, quiet location with few distractions or disruptions. Examples of appropriate places to administer ORF measures include quiet, low-traffic areas of the library or classroom; private offices (e.g., counselor or principal’s office); or resource rooms. Avoid administering CBM ORF passages in high-traffic or noisy locations, such as hallways or busy classrooms.

Recommendations for Data Displays and Interpretation

Once data have been collected using the highest-quality assessment passages according to standardized instructions and procedures and in a consistent, quiet location, educators can graph and interpret the CBM ORF data in ways that minimize the impact of some variability. Recent research studies suggest that better decisions for individual students were made when educators (a) used graphical supports, such as a trend line and a goal line for comparison of progress (e.g., Van Norman & Christ, 2016; Van Norman, Nelson, Shin, & Christ, 2013); (b) collected data for longer periods of time in the presence of variability and low slope (e.g., 12–14 weeks of once-weekly measures, as opposed to 6 weeks of once-weekly measures in the presence of low variability and higher slope, such as growth of at least 1.5 WCPM per week; Christ et al., 2013; Van Norman & Christ, 2016); (c) received training on graph and data interpretation, such as identifying and removing extreme values that can skew a trend line (Nelson et al., 2017); and (d) used visual analysis procedures (e.g., comparing trend line with goal line in the context of the data) rather than decision rules (e.g., 3 data points above or below the line; Van Norman & Christ, 2016). It is important to note that educators may need more advanced training to conduct some of these procedures (i.e., treatment of extreme values) to ensure that accuracy of measurement is preserved. In addition, some researchers have suggested that collecting more probes each week (e.g., three passages vs. one passage) can reduce variability (Christ et al., 2013). This may not be a practical solution for all settings. However, when data are used for high-stakes decisions, such as eligibility for special education services, it is imperative that the data used for these decisions be accurate to ensure students receive appropriate services and supports. These procedures improve educator decision making with CBM ORF in general but are particularly important in the presence of increased variability and low-to-moderate slope (see Table 3 for a checklist of recommendations for data display and interpretation).
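As a rough illustration of the trend-line and goal-line comparison described above, the sketch below projects a student's trend line out to the goal date and compares the projection with the goal, rather than applying a fixed "three points below the line" rule. All numbers (the 12 weekly scores, the goal week, and the goal of 90 WCPM) are invented for the example and are not drawn from the article.

```python
# Illustrative sketch (not a published decision rule): compare a student's
# trend line with the goal (aim) line at the goal date. All values are invented.

def ols(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

weeks = list(range(1, 13))                       # 12 weeks of once-weekly probes
scores = [42, 45, 43, 48, 47, 50, 49, 52, 51, 55, 54, 57]
goal_week, goal_wcpm = 30, 90                    # aimline endpoint (illustrative)

slope, intercept = ols(weeks, scores)
projected = slope * goal_week + intercept        # trend line extended to the goal date

if projected >= goal_wcpm:
    print(f"Projected {projected:.0f} WCPM meets the goal of {goal_wcpm}: continue as planned.")
else:
    print(f"Projected {projected:.0f} WCPM is below the goal of {goal_wcpm}: consider intensifying instruction.")
```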

Mr. Long considers three possible sources of variability: passage-level factors, student-level factors, and setting factors. To control for passage-level factors, he reviews the DIBELS Next technical manual and verifies that the passages were field-tested with students to reduce variability, and he is using the correct instructional-level passages. In consideration of student-level factors, Mr. Long verifies that Laine scored below benchmark on the recent screening assessment. He also talks with other applicable school staff about Laine. Her behavior in school has remained consistent—her motivation to learn is high, and she enjoys spending time learning to read. Mr. Long has also checked in with her parents and has confirmed that nothing unusual is happening at home. Mr. Long then addresses variability due to setting-level factors. After a discussion with the reading specialist and some fidelity checks, Mr. Long confirms that he is adhering to the standardized administration protocols: He reads the scripted directions, starts and stops the timer when required, and checks his scoring prior to graphing the data.

Table 3. Recommendations for Data Display and Interpretation for CBM ORF Progress Monitoring

•• Include trend line and goal line on individual graphs to aid in visual analysis.  Evaluate progress by comparing trend line and goal line.

•• Use visual analysis instead of decision rules (e.g., 3 points above or below the goal line for making changes).  Visual analysis of data includes evaluation of slope, trend, stability, level, and immediacy of effects.

•• Pursue training on graph and data interpretation.  Training may be needed to aid in identifying and addressing extreme values that skew a trend line.  If available, generate and report confidence intervals based on student data to note high and low variability more accurately.

•• Collect more data (e.g., 12–14 weeks) when the data are highly variable or the slope is low (e.g., per-week growth of 0.50–1.0 correctly read words per minute).

•• Consider collecting more probes each week (e.g., three passages per week) to decrease variability and make a decision sooner (e.g., 8–10 weeks).

•• If variability remains despite actions taken to reduce it, consider administering additional academic and behavioral assessments, and evaluate contextual factors (e.g., auditory or visual supports, motivation, etc.).


Mr. Long uses the classroom next door when giving most students the progress-monitoring assessments. However, he often must take Laine to the hallway to complete the probe(s) because there is no other available space close to the classroom during the time Laine is able to take the assessment. Sometimes other groups of students are walking to art or physical education classes during Laine’s assessment. Although Laine is used to reading in the hallway on the floor, the added distractions on some occasions may be contributing to variability in her scores. Mr. Long works with other teachers to find a less distracting location (i.e., an unused office in the library) to administer the assessment. After this change in setting, Mr. Long monitors Laine’s data (see Figure 1). Laine’s data have moderate levels of variability. On the basis of more consistent data, Mr. Long determines that an instructional change needs to be made for Laine because she is not meeting the aimline he set for her based on grade-level benchmarks.

Conclusion

Using CBM ORF progress-monitoring measures is an effective way to consistently evaluate students’ reading performance and make data-based instructional decisions. However, educators should understand and properly control for factors that may influence variability. Working with other professionals at their schools, educators should ensure that data are collected and analyzed following the recommendations presented in this article as well as the recommendations from the publisher of the CBM ORF measure to maximize the use of CBM ORF. It is also important to recognize that variable data are not necessarily inaccurate. If these recommended strategies do not reduce variability adequately, we recommend that educators administer additional assessments and assess contextual variables (e.g., student’s use of visual and auditory supports, level of focus, motivation) to obtain a more complete picture of the student’s academic and behavioral performance.

References

Alonzo, J., Tindal, G., Ulmer, K., & Glasgow, A. (2006). easyCBM® online progress monitoring assessment system. Eugene: University of Oregon, Behavioral Research and Teaching.

Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38, 266–283.

Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1–18. doi:10.1016/j.jsp.2012.09.004

Betts, J., Pickart, M., & Heistad, D. (2009). An investigation of the psychometric evidence of CBM-R passage equivalence: Utility of readability statistics and equating for alternate forms. Journal of School Psychology, 47, 1–17. doi:10.1016/j.jsp.2008.09.001

Briggs, R. N. (2011). Investigating variability in student performance on DIBELS oral reading fluency third grade progress monitoring probes: Possible contributing factors (Doctoral dissertation). Retrieved from ProQuest database. (UMI No. 3466319)

Christ, T. J., & Ardoin, S. P. (2009). Curriculum-based measurement of oral reading: Passage equivalence and probe-set development. Journal of School Psychology, 47, 55–75. doi:10.1016/j.jsp.2008.09.004

Christ, T. J., & Colleagues. (2015). Formative Assessment System for Teachers: Abbreviated technical manual, Version 2.0. Minneapolis, MN: Author and FastBridge Learning.

Christ, T.J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36(1), 130–146.

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51, 19–57. doi:10.1016/j.jsp.2012.11.001

Cummings, K. D., Park, Y., & Schaper, H. A. B. (2013). Form effects on DIBELS Next oral reading fluency progress-monitoring passages. Assessment for Effective Intervention, 38, 91–104. doi:10.1177/1534508412447010

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232. doi:10.1177/001440298505200303

Deno, S. L., Fuchs, L. S., & Marston, D. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507–524.

Deno, S. L., Marston, D., Shinn, M., & Tindal, G. (1983). Oral reading fluency: A simple datum for scaling reading disability. Topics in Learning and Learning Disabilities, 2(4), 53–59.

Derr-Minneci, T. F., & Shapiro, E. S. (1992). Validating curriculum-based measurement in reading from a behavioral perspective. School Psychology Quarterly, 7, 2–16.

Dynamic Measurement Group. (2012). Progress monitoring with DIBELS Next®. Eugene, OR: Author.

Francis, D. J., Santi, K. L., Barr, C., Fletcher, J. M., Varisco, A., & Foorman, B. R. (2008). Form effects on the estimation of students’ oral reading fluency using DIBELS. Journal of School Psychology, 46, 315–342. doi:10.1016/j.jsp.2007.06.003

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement and student awareness of learning. American Educational Research Journal, 21, 449–460. doi:10.2307/1162454

Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculum-based measurement and consultation on teacher planning and student achievement in mathematics operations. American Educational Research Journal, 28, 617–641. doi:10.3102/00028312028003617

Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256. doi:10.1207/S1532799XSSR0503_3


Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal measures of reading comprehension. Remedial and Special Education, 9, 20–28. doi:10.1177/074193258800900206

Good, R. H., III, & Kaminski, R. A. (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.

Good, R. H., III, Kaminski, R. A., Cummings, K., Dufour-Martel, C., Petersen, K., . . . Wallin, J. (2011). DIBELS Next assessment manual. Eugene, OR: Dynamic Measurement Group.

Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33, 204–217.

Hosp, M. K., Hosp, J. L., & Howell, K. W. (2016). The ABCs of CBM: A practical guide to curriculum-based measurement (2nd ed.). New York, NY: Guilford Press.

Kim, Y., Petscher, Y., Schatschneider, C., & Foorman, B. (2010). Does growth rate in oral reading fluency matter in predicting reading comprehension achievement? Journal of Educational Psychology, 102, 652–667.

Kovaleski, J. F., VanDerHeyden, A. M., & Shapiro, E. S. (2013). The RTI approach to evaluating learning disabilities. New York, NY: Guilford Press.

Nelson, P. M., Van Norman, E. R., & Christ, T. J. (2017). Visual analysis among novices: Training and trend lines as graphic aids. Contemporary School Psychology, 21, 93–102. doi:10.1007/s40688-016-0107-9

O’Keeffe, B. V., Bundock, K., Kladis, K. L., Yan, R., & Nelson, K. (2017). Variability in DIBELS Next progress monitoring measures for students at risk for reading difficulties. Remedial and Special Education, 38, 272–283. doi:10.1177/0741932517713310

Ottenbacher, K. J. (1990). Visual inspection of single-subject data: An empirical analysis. Mental Retardation, 28, 283–290.

Pearson. (2017). AimswebPlus progress monitoring guide. Bloomington, MN: NCS Pearson.

Poncy, B. C., Skinner, C. H., & Axtell, P. K. (2005). An investigation of the reliability and standard error of measurement of words read correctly per minute using curriculum based measurement. Journal of Psychoeducational Assessment, 23, 226–238. doi:10.1177/073428290502300403

Powell-Smith, K. A., Good, R. H., & Atkins, T. (2010). DIBELS Next oral reading fluency readability study (Tech. Rep. No. 7). Eugene, OR: Dynamic Measurement Group.

Reed, D. K., & Sturges, K. M. (2012). An examination of assessment fidelity in the administration and interpretation of reading tests. Remedial and Special Education, 34, 259–268. doi:10.1177/0741932512464580

Shapiro, E. S. (2012). Commentary on progress monitoring with CBM-R and decision making: Problems found and looking for solutions. Journal of School Psychology, 51, 59–66. doi:10.1016/j.jsp.2012.11.003

Shinn, M. M., & Shinn, M. R. (2002). AIMSweb training workbook: Administration and scoring of reading curriculum-based measurement (R-CBM) for use in general outcome measurement. New York, NY: Pearson.

Shinn, M. R. (2007). Identifying students at risk, monitoring performance, and determining eligibility within response to intervention: Research on educational need and benefit from academic intervention. School Psychology Review, 36, 601–617.

Tindal, G., Nese, J. F. T., Stevens, J. J., & Alonso, J. (2016). Growth on oral reading fluency measures as a function of special education and measurement sufficiency. Remedial and Special Education, 37, 28–40. doi:10.1177/0741932515590234

Van Norman, E. R., & Christ, T. J. (2016). How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules. Journal of School Psychology, 58, 41–55. doi:10.1016/j.jsp.2016.07.003

Van Norman, E. R., Nelson, P. M., Shin, J., & Christ, T. J. (2013). An evaluation of the effects of graphic aids in improving decision accuracy in a continuous treatment design. Journal of Behavioral Education, 22, 283–301. doi:10.1007/s10864-013-9176-2

Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41, 85–120. doi:10.1177/00224669070410020401

Kaitlin Bundock, Assistant Professor, Department of Special Education and Rehabilitation, Utah State University, Logan; Breda V. O’Keeffe, Assistant Professor, Department of Special Education, Kristen Stokes, Doctoral Student, Department of Special Education, and Kristin Kladis, Doctoral Candidate, Department of Special Education, University of Utah, Salt Lake City.

Address correspondence concerning this article to Kaitlin Bundock, PhD, Utah State University, 2865 Old Main Hill, Logan, UT 84322-2865 (e-mail: [email protected]).

The development of this article was supported in part by a grant from the College of Education at the University of Utah. Opinions expressed herein are the authors’ and do not necessarily reflect the position of the College of Education at the University of Utah, and such endorsements should not be inferred.

TEACHING Exceptional Children, Vol. 50, No. 5, pp. 273–281. Copyright 2018 The Author(s).

Copyright of Teaching Exceptional Children is the property of Sage Publications Inc. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.

Intervention in School and Clinic, 2019, Vol. 55(2), 113–119. © Hammill Institute on Disabilities 2019. DOI: 10.1177/1053451219837634. Article reuse guidelines: sagepub.com/journals-permissions. isc.sagepub.com

What Works for Me (Melinda Leko, Associate Editor)

Data-Based Individualization in Reading: Tips for Successful Implementation

Esther R. Lindström, PhD1, Samantha A. Gesel, MEd2, and Christopher J. Lemons, PhD2

1Lehigh University, Bethlehem, PA, USA; 2Peabody College of Vanderbilt University, Nashville, TN, USA

Corresponding Author: Esther R. Lindström, PhD, Lehigh University, College of Education, 111 Research Drive, A-319, Bethlehem, PA 18105-4794, USA. Email: [email protected]

Abstract: Students with severe and persistent academic or behavioral challenges may benefit from data-based individualization (DBI). Starting with an evidence-based standard protocol and systematic progress monitoring, teachers can evaluate growth and implement individualized interventions to meet students’ needs. Specifically, this article addresses the systematic use of student data to determine content and pacing for intensive reading instruction. Insights from implementing this approach with struggling first grade readers in Tier 3 of an RTI framework are provided. Evidence-based standard protocols, strategic data collection and management, and team collaboration are crucial elements for successful implementation.

Keywords: intervention, learning disabilities

Despite decades of advancements in research, legislation, and practice intended to improve academic outcomes for struggling students in the United States, an estimated 2.5 million students require intensive academic interventions (Danielson & Rosenquist, 2014). Data-based individualization (DBI) is an approach that may benefit students with disabilities and their typical peers who require additional, intensive supports to meet their academic goals. With DBI, teachers use regularly collected data from curriculum-based measurement (CBM; Deno, 1985) to evaluate students’ response to intervention and make decisions about intensifying instruction. DBI may be implemented within the most intensive tier(s) of an existing response to intervention (RTI) framework and may be incorporated into special education if services are provided outside of the RTI framework. This article outlines how to enact DBI using recommendations from Lemons, Kearns, and Davidson (2014) to provide high-quality, intensive reading instruction to struggling first graders, offering reflections and recommendations on implementation.

Data-Based Individualization

The National Center on Intensive Intervention (NCII; intensiveintervention.org) defines DBI as a research-based framework for providing intensive instruction to students with severe and persistent academic and/or behavioral needs (see Figure 1). At its core, DBI depends upon a validated intervention program implemented with fidelity. Teachers set an appropriate goal for a student and monitor progress weekly using carefully selected measures. Ideal progress-monitoring tools are (a) linked to instruction, (b) sensitive to growth, and (c) easy to administer. If the student demonstrates lack of RTI, usually defined by four data points below the goal line (Fuchs, Fuchs, & Vaughn, 2014), the teacher conducts a diagnostic assessment to determine specific areas of need. With this information, the teacher can make systematic adaptations to the curriculum, adjusting content and/or delivery to target the student’s needs. Quantitative adaptations (e.g., more frequent intervention sessions, smaller groups) may be the first step to individualization. However, they may be constrained by personnel, time, or funds. In contrast, qualitative adaptations—addressing content or instructional focus—require greater expertise but may be less resource-intensive (Lemons et al., 2014). The teacher monitors progress during each adaptation period, noting specific strengths and weaknesses. If growth is sufficient, instruction and progress monitoring continue; if inadequate, further adaptations are made to intensify and individualize instruction. In this iterative process, all instructional decisions are tied to progress monitoring data and the predetermined goal, and tools may be adjusted to reflect students’ changing achievement and needs.
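The nonresponse criterion cited above, four data points below the goal line (Fuchs, Fuchs, & Vaughn, 2014), translates directly into a small check against the aimline; it is read here as four consecutive points, one common interpretation. The sketch below is illustrative only: the straight-line aimline, the example scores, and the baseline and goal values are assumptions, not data from the project described in this article.

```python
# Illustrative check for inadequate response: four consecutive progress-monitoring
# points below the goal (aim) line. Baseline, goal, and weekly scores are invented.

def goal_line(week, baseline, goal, total_weeks):
    """Expected score at `week` on a straight aimline from baseline to goal."""
    return baseline + (goal - baseline) * week / total_weeks

def inadequate_response(scores, baseline, goal, total_weeks, run=4):
    """True if `run` consecutive points fall below the goal line."""
    consecutive = 0
    for week, score in enumerate(scores, start=1):
        if score < goal_line(week, baseline, goal, total_weeks):
            consecutive += 1
            if consecutive >= run:
                return True
        else:
            consecutive = 0
    return False

weekly_scores = [12, 13, 12, 14, 13, 15]         # example weekly CBM data points
print(inadequate_response(weekly_scores, baseline=12, goal=40, total_weeks=30))  # -> True
```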

To maximize effectiveness, NCII recommends teachers implement DBI as a team, collaborating with other school personnel. DBI teams work together to generate a comprehensive understanding of student progress. Teams streamline problem solving and enhance instruction by discussing data-based adaptations, sharing resources, and making decisions collaboratively. Team discussions invite multiple perspectives on one student, and also create a structure to identify patterns across students based on shared characteristics and potential treatments.

The Project: Supporting Struggling Readers with DBI

We partnered with Edmundson Elementary to begin implementation of DBI (see Note 1). Edmundson is a public school in an urban district in the southeastern United States, serving almost 600 students in grades K–4. Eager to meet the needs of early readers, Edmundson teachers identified struggling first graders with fall benchmark assessment scores substantially below grade level and in need of Tier 3 supports. The DBI team assessed students using additional measures of (a) phonological awareness (phoneme segmentation fluency; PSF), (b) decoding (nonsense word fluency; NWF), (c) letter sound fluency (LSF), and (d) oral reading fluency (ORF) from the DIBELS Benchmark Assessment (Good & Kaminski, 2011) to gain a more precise understanding of student skills. Then, the team used these data and end of year benchmark guidelines to create homogeneous reading pairs and set goals for students. Throughout the project, student progress was monitored using weekly CBM measures. Of eight students identified for Tier 3 DBI, three cases are highlighted in the sections that follow.

Selecting a Validated Intervention Program

The DBI team of certified special education teachers provided daily Tier 3 reading intervention to student pairs. The team used Road to Reading (RTR; Blachman & Tangel, 2008) as the primary evidence-based standard protocol intervention package. RTR incorporates fundamental components of effective reading instruction (National Reading Panel [NRP], 2000) and intensive intervention (Vaughn, Wanzek, Murray, & Roberts, 2012). Sessions (30–40 minutes) targeted phonological awareness (PA), decoding, and encoding. Interventionists received initial training with RTR, practice, and ongoing support from research staff. During the intervention period, the team held weekly DBI discussions of student progress and instructional decisions. The DBI team meetings were centered on student data, incorporating findings from our measures with those of school personnel to get a detailed depiction of student strengths and weaknesses.

Although the first graders all experienced difficulties with reading, students’ specific abilities varied. Some had

Figure 1. The iterative nature of data-based individualization. Reprinted with permission from Interactive DBI Process, by National Center on Intensive Intervention, retrieved from http://www.intensiveintervention.org. Copyright 2014 by American Institutes for Research.


challenges in PA, evidenced by PSF data and informal observations. Others struggled with higher order demands of reading connected text fluently. In both cases, students required individualization to support their progress toward the eventual goal of fluent reading. Monitoring CBM data and engaging in error analyses allowed the DBI team to determine potential impact of these qualitative adaptations on individual students’ achievement and systematically adjust as necessary.

Progress Monitoring

A challenge for practitioners is to choose measures that comprehensively assess student skills and deficits, while also remaining sensitive to small changes in achievement. Two students’ data highlighted this challenge. Using guidelines from Fuchs et al. (2014) and Lemons et al. (2014), baseline scores were multiplied by 1.5 to set ambitious year-end goals for each student. Both students had one measure indicating inadequate growth, but implications differed based on student and measurement characteristics. One student, Kyle, showed growth in LSF and NWF, but had relatively stagnant PSF scores (see Figure 2). This measure identified a weakness in fundamental PA skills that otherwise would have been missed. The team adapted Kyle’s RTR sessions to target this skill gap, including intensive practice in “Guess My Word” and “First Sound” PA activities from the K-PALS curriculum (Fuchs et al., 2001) and added letter supports. Kyle continued to show inadequate response for two iterations of adaptations. However, after removing letter supports and changing his intervention to individual sessions, Kyle’s PSF scores spiked immediately and continued to grow.
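The goal-setting arithmetic mentioned above is simple enough to show as a worked example: take a baseline score (here the median of three probes, a common convention also listed in Table 2 of the first article), multiply it by 1.5 for an ambitious end-of-year goal, and draw an aimline from baseline to goal. The probe scores and the 32-week window below are invented for illustration.

```python
# Worked example of the goal-setting rule cited above (Fuchs et al., 2014):
# end-of-year goal = baseline x 1.5, with a straight aimline from baseline to goal.
# Probe scores and the 32-week window are invented example values.
from statistics import median

baseline_probes = [18, 22, 20]                     # three baseline CBM probes
baseline = median(baseline_probes)                 # 20
goal = baseline * 1.5                              # ambitious end-of-year goal: 30.0
weeks_to_goal = 32
weekly_growth = (goal - baseline) / weeks_to_goal  # expected gain per week on the aimline

print(f"Baseline {baseline}, goal {goal}, aimline slope {weekly_growth:.2f} per week")
# -> Baseline 20, goal 30.0, aimline slope 0.31 per week
```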

In contrast, Kyle’s classmate, Chase, improved in PSF and NWF, but showed minimal growth in ORF (see Figure 3). Even after adapting instruction to include fluency training via timed and repeated reading, Chase’s ORF data fell below his goal line. It is important to note that because ORF does not have benchmark goals until the middle of first grade, and the students were already below grade level, ORF may not have been sufficiently sensitive for use in determining instructional adaptations. Targeting foundational skills was more appropriate.

Ongoing Intervention Adaptation

Students with pervasive and intensive needs often exhibit concomitant academic, behavioral, and motivational difficulties (Hinshaw, 1992; Kuchle, Edmunds, Danielson, Peterson, & Riley-Tillman, 2015), which may impact effectiveness of academic interventions (Nelson, Benner, & Gonzalez, 2003). This project addressed academic, behavior, and motivation challenges by adjusting the standard RTR protocol (e.g., subtracting 3 minutes from oral reading to spend on PA).

Phonics-based adaptations involved adding picture cues to letter cards and letter-sound sorting. The PA adaptations introduced activities focused on segmenting and blending sounds in words (e.g., h-a-t, haaat, hat). Together, these adaptations led to improvements in decoding for Monique, the most struggling reader (see Figure 4). On the other hand, fluency adaptations were implemented for Chase, who had mastered PA skills but struggled to read text fluently. In the standard RTR protocol, students read aloud decodable text aligned with the curriculum. In Chase’s adaptation, he spent more time on this step to build fluency using repeated readings (NRP, 2000; Samuels, 1979). Chase graphed his reading time during this activity, which increased motivation by incentivizing improvement.

In addition to adaptations regarding content, the team also adapted delivery to meet students’ varying behavior needs, including aggression (e.g., hitting the table, throwing instructional materials) and avoidance (e.g., hiding under the table, walking away). Two students used checklists to monitor adherence to school expectations, such as having a safe body and doing one’s best work. Additional behavioral and motivational supports were introduced as needed. These supports included token economies, behavior-specific praise, earned breaks, or academic games, depending on student preference and hypothesized functions of behavior.

Feasibility of Implementation

When thinking about starting DBI, questions of feasibility are inevitable: Can this really be done? Do I have the resources and the support I need to do this well? In short, the answer is yes. Here are some tips for implementing DBI successfully for the first time:

1. Start with a high-quality standard protocol. Your school may already have one available. This evidence-based curriculum will be at the core of your instruction, prior to implementing DBI. This program may be sufficient for some of your struggling students. Using the DBI framework, you will determine responsiveness and adapt content and delivery based on inadequate student response. To evaluate instructional programs, refer to the NCII Tools Chart (intensiveintervention.org).

2. Establish ambitious goals. Use multiple data sources to get a clear understanding of student achievement and needs. One research-supported method to estimate end of year achievement is to multiply the baseline score by 1.5 (Fuchs et al., 2014). For a specific example of goal setting in DBI, see Lemons et al. (2014). Visit intensiveintervention.org for videos and resources on appropriate goal setting.


Figure 2. Comparing sensitivity of measures and responsiveness to adaptations.


3. Stay focused. Begin with one student in one area of instruction. This doesn’t have to be a schoolwide initiative; in fact, it’s a good idea to become familiar with DBI on a small scale, and then consider scaling up when it becomes more comfortable. Complete the IRIS Center modules on DBI to practice the procedures necessary for implementation (https://iris.peabody.vanderbilt.edu/module/dbi1/).

4. Get organized. Individualizing a curriculum for struggling students can involve many instructional materials and data components.

Figure 3. Considering sensitivity of measures.

Figure 4. Focusing adaptations on phonics skills to bolster letter-sound correspondence.

118 Intervention in School and Clinic 55(2)

materials and data components. These need to stay organized to be accessed and used effectively. DIBELS (https://dibels.uoregon.edu/; Good & Kaminski, 2011) and other progress monitoring tools offer online data management to help keep things in order. For a more detailed example of how a teacher would integrate the various components (e.g., assessment, graphing, intervention time), please see Lemons et al. (2014).

5. Be proactive. Within the dynamic ecosystem of a school, DBI is susceptible to sudden changes in enrollment, scheduling, and other potential obstacles. Plan ahead to minimize scheduling conflicts and provide critical time for DBI team meetings, progress monitoring, and intervention. Cloud-based calendars and file-sharing systems such as Google Classroom can help to keep all stakeholders informed and involved.

6. Collaborate. Share successes and challenges with your DBI team. Setting aside time to specifically discuss—and listen to—ways to meet individual student needs will pay off in spades. Other professionals may introduce ideas or resources that you have not previously tried, and vice versa. Because these conversations are based on student data, you can have more confidence in evaluating the effectiveness of instruction and, when necessary, selecting appropriate adaptations for your student(s). NCII offers free tools for group data meetings at https://intensiveintervention.org/tools-support-intensive-intervention-data-meetings.

7. Keep at it. DBI is intended for students who have not responded adequately to previous instruction. Their gaps in fundamental skills tend to contribute to challenges in multiple domains. For these reasons, progress can be slow. You may need to try a few different adaptations to find the right instructional plan for your student. Read examples of adaptations and lessons learned by other teachers at https://intensiveintervention.org/content/implementation-examples-field.
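As promised in tip 2, here is a minimal sketch of the baseline-times-1.5 goal arithmetic. The function name and the sample score are illustrative only and are not drawn from the article; in practice you would pair an estimate like this with the normative growth-rate resources noted above.

def end_of_year_goal(baseline_score, multiplier=1.5):
    # One research-supported estimate noted in tip 2: multiply the
    # baseline score by 1.5 (Fuchs et al., 2014).
    return round(baseline_score * multiplier)

# Hypothetical example: a student whose baseline is 40 words correct per minute
print(end_of_year_goal(40))  # prints 60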

Final Thoughts

Data-based individualization is one approach to intensive intervention for students with persistent difficulties. Much of the benefit of DBI comes from the iterative, problem-solving process itself. Continuously monitoring student data creates a system in which you can make informed adaptations and assess the effect of those changes on student outcomes. Readers may learn more about DBI by completing modules available through the IRIS Center at Vanderbilt University (https://iris.peabody.vanderbilt.edu/module/dbi1/).

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research described in this article was supported in part by Grant H325H140001 to Vanderbilt University from the Office of Special Education Programs within the U.S. Department of Education. Nothing in the article necessarily reflects the positions or policies of the funding agency and no official endorsement by them should be inferred.

Note

1. This article describes an authentic situation observed by the authors. The names of students and schools have been replaced with pseudonyms.

ORCID iD

Esther R. Lindström https://orcid.org/0000-0001-6343-2538

References

Blachman, B. A., & Tangel, D. M. (2008). Road to reading: A program for preventing and remediating reading difficulties. Baltimore, MD: Brookes.

Danielson, L., & Rosenquist, C. (2014). Introduction to the TEC special issue on data-based individualization. Teaching Exceptional Children, 46(4), 6–12.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.

Fuchs, D., Fuchs, L. S., Al Otaiba, S., Thompson, A., Yen, L., McMaster, K. N., Svenson, E., & Yang, N. J. (2001). K-PALS: Helping kindergartners with reading readiness: Teachers and researchers in partnerships. Teaching Exceptional Children, 33(4), 76–80.

Fuchs, D., Fuchs, L. S., & Vaughn, S. (2014). What is intensive instruction and why is it important? Teaching Exceptional Children, 46(4), 13–18.

Good, R. H., & Kaminski, R. (2011). DIBELS next assessment manual. Eugene, OR: Dynamic Measurement Group.

Hinshaw, S. P. (1992). Externalizing behavior problems and academic underachievement in childhood and adolescence: Causal relationships. Psychological Bulletin, 111, 127–155.

Kuchle, L. B., Edmunds, R. Z., Danielson, L. C., Peterson, A., & Riley-Tillman, T. C. (2015). The next big idea: A framework for integrated academic and behavioral intensive intervention. Learning Disabilities Research & Practice, 30(4), 150–158. doi:10.1111/ldrp.12084

Lemons, C. J., Kearns, D. M., & Davidson, K. A. (2014). Data-based individualization in reading: Intensifying interventions for students with significant reading disabilities. Teaching Exceptional Children, 46(4), 20–29. doi:10.1177/0040059914522978

National Center on Intensive Intervention. (2012). Interactive DBI process. Washington, DC: Office of Special Education, U.S. Department of Education. Retrieved from http://www.intensiveintervention.org

National Reading Panel. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Pub. No. 00-4769). Washington, DC: Government Printing Office.

Nelson, J. R., Benner, G. J., & Gonzalez, J. (2003). Learner characteristics that influence the treatment effectiveness of early literacy interventions: A meta-analytic review. Learning Disabilities Research and Practice, 18(4), 255–267.

Samuels, S. J. (1979). The method of repeated readings. Reading Teacher, 32, 403–408.

Vaughn, S., Wanzek, J., Murray, C. S., & Roberts, G. (2012). Intensive interventions for students struggling in reading and mathematics: A practice guide. Portsmouth, NH: RMC Research Corporation, Center on Instruction.

Data-Based Individualization

Using Data-Based Individualization to Intensify Mathematics Intervention for Students With Disabilities Sarah R. Powell and Pamela M. Stecker

Riverview Middle School serves students in Grades 5 through 8. Molly, a sixth grader at the school, receives special education services on the basis of a specific learning disability in mathematics. Molly's special education teacher, Mr. Drummond, works with Molly five days a week, but Molly is demonstrating minimal progress with the current intervention. Mr. Drummond is especially concerned about Molly's understanding of fractions because of the emphasis on fractions with new standards and standardized assessments at the school. Mr. Drummond does not want to ignore all of the intervention work already conducted with Molly, so he decides to use principles for intensifying instruction as well as Molly's progress monitoring data to try to better meet her individual needs and to inform his decision making about whether these intervention changes are working as desired for Molly.

Data-based individualization (DBI) is a continuous process connecting assessment and intervention (see Fuchs, Fuchs, & Vaughn, this issue). DBI provides teachers like Mr. Drummond with an evidence-based method for individualizing interventions for students who do not demonstrate adequate response. Assessment data gathered through the use of progress monitoring help teachers analyze and determine ways to modify components of intervention. After teachers make instructional modifications, they continue to use progress monitoring data to determine whether these adaptations are acceptable or whether subsequent adaptations must be made. Consequently, DBI is an iterative process that involves (a) adapting instruction using principles of intensive intervention and evidence-based practices and (b) implementing these adaptations consistently and regularly. Progress monitoring probes used to evaluate the success of each intervention should be technically sound and conducted at least weekly. At periodic intervals (depending upon the frequency of data collection and the length of the intervention), a progress check helps teachers compare individual student progress to goal expectations. In addition, teachers may use informal diagnostic assessment, such as analyzing student responses on the probes and following up with an interview while the student works selected problems, to gain insight about student strengths and weaknesses. This process is repeated, as the teacher continues to build a more effective instructional program for a particular student across time.

The National Center on Intensive Intervention (NCII; www.intensiveintervention.org) has identified the principles of and processes for intensive intervention (Fuchs et al., 2008; Vaughn, Wanzek, Murray, & Roberts, 2012):

• smaller steps,
• precise language,
• repeat language,
• student explains,
• modeling,
• manipulatives,
• worked examples,
• repeated practice,
• error correction,
• fading support,
• fluency, and
• move on.

In this article, we illustrate how Mr. Drummond uses DBI to strengthen Molly’s skills by intensifying her instruction within the multitiered system of support in place at Riverview Middle School. Mr. Drummond’s approach to applying the principles of intensive intervention also illustrates how tracking student progress informs instructional decisions about the effectiveness of intervention adaptations.

Implementing DBI

When Molly started at Riverview Middle at the beginning of fifth grade, her general education teacher, recognizing that Molly was struggling, paid closer attention to Molly's work and tried to provide additional assistance to Molly and two of her classmates who also were performing below standards. Molly's teacher monitored their mathematics performance using computation progress monitoring probes. Figure 1 provides sample items from a fifth-grade probe that reflects the computational skills outlined by the Common Core State Standards (CCSS; National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010) and matches the curriculum used in Molly's school.

At this stage (Tier 1), Molly's teacher was following the system of support used at Riverview Middle School: weekly progress monitoring of students who did not meet the beginning benchmark for fifth grade. After 8 weeks of monitoring and instruction, Molly continued to demonstrate inadequate growth and an inadequate end level, so she was included in small-group tutoring in mathematics (Tier 2).

At Tier 2, Molly also participated in 18 weeks of a standardized, small-group tutoring program. Riverview Middle uses this intervention platform for all fifth-grade students targeted as needing additional assistance in mathematics. The emphasis of this supplementary program is building fluency with multiplication and division facts and computational algorithms. Students also focus on fraction and decimal concepts. Molly's Tier 2 tutor administered computation progress monitoring probes on a weekly basis. Despite 18 weeks of intensive tutoring, Molly still failed to demonstrate an adequate slope or an adequate end-level performance, so she was referred for a special education evaluation. (Although many schools complete three or four tiers of intervention before special education referral, Riverview Middle uses a three-tier model with the third tier as special education.) During the evaluation process, other factors, such as an intellectual disability and emotional/behavioral disorder, were ruled out as contributing to Molly's difficulty with mathematics. In addition to administering a standardized, norm-referenced assessment in mathematics, the progress monitoring data collected during the first two tiers helped to corroborate Molly's learning disability in the area of mathematics, and the school developed an individualized education program (IEP) for Molly in the area of mathematics. The school year, however, was at its end by this point in time, so IEP implementation began at the beginning of Molly's sixth-grade year.

Planning

Setting Molly’s goal. Following the summer break, Molly entered sixth grade, and Mr. Drummond began to implement interventions to respond to Molly’s mathematics difficulties. He reviewed the work conducted for Molly’s IEP development and consulted the NCII website. Because Molly’s performance had been so weak during the fifth grade, the IEP team decided that Molly’s computational goal should focus on fifth-grade content and standards and recommended that fifth-grade progress monitoring probes continue to be used to track Molly’s progress toward meeting her annual goal. Molly’s present level of performance in computation on fifth-grade probes was included in the IEP: Given 25 computational problems sampled randomly from the fifth-grade curriculum, Molly currently writes 12 digits correct in answers in 6 minutes.

After consulting normative data about typical growth on the probes, the team decided that a growth rate of 0.9 digits correct per week would be an appropriately ambitious rate of improvement for Molly to help close the widening gap between her performance and that of her peers. With 36 weeks to the end of the school year, the team multiplied the growth rate of 0.9 digits per week by 36 weeks to yield an improvement of approximately 32 digits, which, added to Molly’s baseline of 12 digits correct, produces a year-end target of about 44. Molly’s IEP computational goal read: Given 25 computational problems sampled randomly from the fifth-grade curriculum, Molly will write 44 digits correct in answers in 6 minutes by the end of the school year.

Diagnosing Molly’s performance. Based on his work with Molly and her performance on computation progress monitoring probes, Mr. Drummond knows that Molly does well with whole-number computation. After consulting the CCSS and reading about the importance of understanding fractions for later mathematics competence (Bailey, Hoard, Nugent, & Geary, 2012), Mr. Drummond focuses on Molly’s skill with fractions. Examining Molly’s performance on the fifth-grade probes, he had noticed that, although Molly got some digits correct in the answers, she missed parts of all 13 fraction items represented on the measure as well as six of eight problems involving decimals.

Figure 1. Sample Fifth-Grade Computation Progress Monitoring Probe (items include whole-number, decimal, and fraction computation).

Establishing instructional objectives. Mr. Drummond refers to the CCSS, the progress monitoring probes, and Molly’s performance to determine the critical content for instruction. He groups Molly’s instructional objectives as they relate to (a) adding and subtracting fractions with like and unlike denominators, (b) multiplying fractions by fractions and whole numbers, (c) dividing fractions by whole numbers and dividing whole numbers by fractions, and (d) performing all four operations with decimals.

Making an assessment plan. Because Molly’s IEP requires regular progress monitoring using probes at the fifth-grade level, Mr. Drummond administers probes for 6 minutes on the last day of school each week. He grades Molly’s performance while Molly watches, and then helps Molly graph her own score. Mr. Drummond also decides to periodically check Molly’s progress (every 3–4 weeks) on sixth-grade probes to see how she performs on grade-level content and to check whether Molly is able to generalize her new skills to more difficult problems. He decides to implement intensive intervention for 9 weeks before determining whether the intervention is helping Molly to stay on track toward meeting her IEP goal.
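To illustrate how an assessment plan like this might be turned into a simple decision aid, the sketch below logs weekly digits-correct scores, fits a trend line, and projects the end-of-year level against the IEP goal of 44 digits correct. It is a hypothetical sketch with assumed data and function names, not part of the published protocol; the teacher's actual decisions also rest on error analysis and professional judgment.

def trend_slope(weeks, scores):
    # Ordinary least-squares slope: digits correct gained per week.
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    numerator = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    denominator = sum((w - mean_w) ** 2 for w in weeks)
    return numerator / denominator

def projected_end_level(weeks, scores, final_week=36):
    # Extend the fitted trend line out to the last week of the school year.
    slope = trend_slope(weeks, scores)
    intercept = sum(scores) / len(scores) - slope * (sum(weeks) / len(weeks))
    return intercept + slope * final_week

# Hypothetical data: nine weekly probes (digits correct in 6 minutes)
weeks = list(range(1, 10))
scores = [12, 12, 13, 13, 14, 13, 14, 15, 15]
goal = 44  # Molly's year-end IEP goal in digits correct

projection = projected_end_level(weeks, scores)
if projection < goal:
    print(f"Projected end level {projection:.0f} is below the goal of {goal}: consider an adaptation.")
else:
    print(f"Projected end level {projection:.0f} meets the goal of {goal}: keep the current plan.")

The comparison mirrors the progress checks described below: when the trend line is less steep than the goal line, the intervention is adapted.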

Preparing to implement DBI and making instructional adaptations. After consulting resources on the NCII website, Mr. Drummond chooses three principles to incorporate when adapting instruction during the first phase of Molly’s intervention to make it more intensive (see Figure 2):

• Smaller steps: Mr. Drummond plans to use a task analysis to break specific fraction problems into smaller steps.

• Modeling: Mr. Drummond reads more about modeling from the teacher’s perspective and how to engage the student in active involvement.

• Manipulatives: A conference with one of the mathematics teachers at Riverview Middle helps Mr. Drummond to learn more about appropriate manipulatives for teaching fractions.

Intensive Intervention A

Considering Molly’s weak performance and the importance of understanding and using fractions for future mathematics success, Mr. Drummond determines that the focus of Molly’s intervention adaptation should include addition, subtraction, multiplication, and division of fractions. Considering the sequence of skills, Mr. Drummond decides to focus on comparing fractions before working on addition and subtraction of fractions. He also thinks it is best to work on addition and subtraction before moving on to multiplication and division.

DBI. Using the principle of smaller steps, Mr. Drummond teaches Molly to first check to see if the denominators are the same when comparing fractions. If denominators are alike, Molly can compare the numerators to determine which fraction is greater. If the denominators are not the same, Mr. Drummond teaches Molly how to find the least common denominator (LCD). Mr. Drummond breaks this process into even smaller steps so Molly knows what to do. Learning how to calculate the LCD of two fractions will help Molly with addition and subtraction of fractions, so Mr. Drummond believes this time is well spent.

Figure 2. Molly’s Intensive Intervention. Phase A implements smaller steps, modeling, and manipulatives; Phase B (all from A) adds repeated practice, error correction, and precise language; Phase C (all from A and B) adds worked examples, student explains, and fading support.

The second principle Mr. Drummond incorporates in Molly’s intensive intervention is modeling. Mr. Drummond explicitly models how to find the LCD by saying:

First, let’s find the multiples of 4. What’s 4 times 1? (4) What’s 4 times 2? (8) What’s 4 times 3? (12) What’s 4 times 4? (16). Multiples are the answer to multiplication problems with the same number. So, to find the multiples, think of the number times 1, times 2, times 3, times 4, and keep going. What are the multiples of 4? Let’s say them together: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40. That’s enough multiples for now!
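The multiples-listing routine Mr. Drummond models can also be written as a short procedure. The sketch below is a rough illustration with hypothetical function names (it lists multiples of the larger denominator until a common one is found), not a routine taken from the article:

def least_common_denominator(d1, d2):
    # List multiples of the larger denominator until one is also a
    # multiple of the smaller; that shared multiple is the LCD.
    step = max(d1, d2)
    multiple = step
    while multiple % d1 != 0 or multiple % d2 != 0:
        multiple += step
    return multiple

def greater_fraction(n1, d1, n2, d2):
    # Rewrite both fractions over the LCD, then compare numerators.
    lcd = least_common_denominator(d1, d2)
    left, right = n1 * (lcd // d1), n2 * (lcd // d2)
    if left == right:
        return "the fractions are equal"
    return f"{n1}/{d1} is greater" if left > right else f"{n2}/{d2} is greater"

print(least_common_denominator(4, 6))  # 12
print(greater_fraction(3, 5, 1, 2))    # 3/5 is greater (the Figure 3 example)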

The third principle Mr. Drummond incorporates in Molly’s instruction is using manipulatives. Mr. Drummond usually uses fraction circles to help students understand fractions. Because Molly has not been performing well with fractions, Mr. Drummond decides to use other fraction manipulatives, such as fraction bars and two-color counters (see Figure 3). Fraction bars allow Molly to identify which of two fractions is longer (i.e., the greater fraction), and the two-color counters help her identify the LCD. Molly can change the denominator easily with the two-color counters by adding or subtracting counters.

Figure 3. Comparing Fractions With Fraction Bars and Two-Color Counters (e.g., compare 3/5 to 1/2 with fraction bars; compare 3/5 to 1/2 with two-color counters; which is greater?).

Figure 4. Molly’s Computation Scores, Intensive Intervention Phase A (x-axis: weeks of instruction). Note: The blue diamonds are Molly’s scores. The black line over the blue diamonds is Molly’s trend line. The green dot is Molly’s end-of-year goal. The green line is Molly’s goal line.

Progress check. After 9 weeks of instruction, Mr. Drummond evaluates Molly’s progress monitoring graph to determine whether the intervention has kept Molly on track toward her IEP goal (see Figure 4). Although Molly’s performance is improving very gradually, the trend of her current progress is much less steep than her goal line. Mr. Drummond recognizes that, at this slow rate of improvement, Molly is not likely to meet her year-end goal and that it is time to make another intervention adaptation to try to increase Molly’s rate of improvement. Although Molly is performing fairly well on whole-number computation problems, by conducting an error analysis of Molly’s last three progress monitoring probes Mr. Drummond realizes that the only fraction problems Molly answers correctly are addition or subtraction with like denominators.

Intensive Intervention B. Mr. Drummond decides to continue instruction in fractions with unlike denominators using the principles of intensive intervention from Intensive Intervention Phase A (i.e., smaller steps, modeling, manipulatives) while incorporating three additional principles to further intensify instruction: providing repeated practice, correcting errors, and using precise language.

DBI. With repeated practice, Mr. Drummond provides multiple addition and subtraction problems for Molly to practice on a daily basis. He switches between addition and subtraction, so Molly learns to be aware of the operator signs (i.e., + and −) and what the signs mean. By incorporating both operator signs, Molly also practices discriminating among problem types.

Figure 5. Molly’s Computation Scores, Intensive Intervention Phase B (x-axis: weeks of instruction). Note: The blue diamonds are Molly’s scores. The black vertical line denotes the shift from Phase A to Phase B. The black line over the blue diamonds is Molly’s trend line during Phase B. The green dot is Molly’s end-of-year goal. The green line is Molly’s goal line.

In an effort to improve Molly’s self-esteem about mathematics, Mr. Drummond had shied away from correcting all of Molly’s errors during Phase A. He realizes now that he may have been doing her a disservice, as one of the guiding principles of intensive intervention is to correct errors. In Intensive Intervention Phase B, when Molly makes an error, Mr. Drummond says, “Let’s look at this part again.” Then, he and Molly work through the mistakes with Mr. Drummond asking questions to guide Molly’s thinking. Sometimes Mr. Drummond models how to do a part of a problem (e.g., finding the LCD, changing a mixed fraction into an improper fraction), and then asks Molly to solve a similar problem.

Mr. Drummond generates a list of important vocabulary words related to fractions—numerator, denominator, LCD, multiple, like, and unlike—and then takes time to provide Molly with precise definitions of each term (e.g., the numerator is the number of parts of the whole). Mr. Drummond also reviews the way he explains the steps of adding and subtracting fractions, and tries to make his teaching steps more direct and his language precise.

Progress check. Mr. Drummond continues to monitor Molly’s progress using fifth-grade computation probes. After the intervention’s 9-week second phase, the number of Molly’s correct digits and the overall trend of her improvement have increased (see Figure 5). Mr. Drummond is pleased and recognizes that intensifying instruction seems to be helping Molly. Because the trend of Molly’s current progress, however, is still not as steep as it needs to be for Molly to reach her year-end goal, Mr. Drummond decides to implement another round of intervention adaptations. He conducts an error analysis on Molly’s last three computation probes to better understand where she is struggling, and realizes that although Molly is solving most addition and subtraction fraction problems correctly, she is not able to correctly answer any multiplication and division problems.
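An error analysis like the one Mr. Drummond conducts can be kept as a simple tally of missed items by problem type. The sketch below is a hypothetical illustration (the labels and data are invented for the example), not the authors' procedure:

from collections import Counter

def error_analysis(probe_items):
    # Tally missed items by problem type across recent probes.
    # Each item is a (problem_type, answered_correctly) pair.
    misses = Counter(ptype for ptype, correct in probe_items if not correct)
    return misses.most_common()

# Hypothetical items pooled from the last three probes
items = [
    ("fraction addition, like denominators", True),
    ("fraction subtraction, unlike denominators", True),
    ("fraction multiplication", False),
    ("fraction multiplication", False),
    ("fraction division", False),
    ("decimal addition", True),
]

for problem_type, count in error_analysis(items):
    print(f"{problem_type}: {count} missed")

A tally like this points directly at the next adaptation target; here, multiplication and division of fractions, as in Molly's case.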

Intensive Intervention C

Molly’s data have convinced Mr. Drummond that using the principles of intensive intervention has helped Molly be more successful with her mathematics. In addition, Mr. Drummond thinks that incorporating these principles has made him more conscious of the details of instructional design and of his own decision making during instructional delivery; his instructional skills are improving along with Molly’s performance. Because Molly still is not proficient with the fraction problems on the probes, Mr. Drummond decides to incorporate three more principles for intensive intervention: using worked examples, asking the student to explain, and fading support.

DBI. Mr. Drummond begins Intensive Intervention Phase C by continuing to reinforce addition and subtraction of fractions, but he also introduces multiplication of fractions. With multiplication, Mr. Drummond starts off using worked examples. With a worked example, the answer is already provided; the idea is to engage the student in a conversation about how the work was completed. Worked examples help Molly understand procedures and concepts as she works on developing knowledge of the steps involved in solving a specific type of problem. Molly often refers back to the worked example while she is working new problems to compare her steps against those shown in the worked example.


To reinforce Molly’s understanding of addition and subtraction with like and unlike denominators, Mr. Drummond asks Molly to explain or to instruct him on how to solve specific problems. By getting Molly to talk about her fractions work, Mr. Drummond is able to check for (a) understanding of concepts, (b) correct use of vocabulary, and (c) understanding of procedures. Molly is a little hesitant at the beginning to talk aloud about her mathematics work, so Mr. Drummond first models how he would talk about solving a problem, and then Molly gives it a try. Mr. Drummond also uses this strategy when asking Molly to make corrections; having Molly explain to him how she worked the missed problem helps her to identify where she made mistakes.

Figure 6. Molly’s Computation Scores, Intensive Intervention C (x-axis: weeks of instruction). Note: The blue diamonds are Molly’s scores. The black vertical lines denote the shifts between phases. The black line over the blue diamonds is Molly’s trend line during Phase C. The green dot is Molly’s end-of-year goal. The green line is Molly’s goal line.

Figure 7. Principles of Intensive Intervention and DBI. The figure lists the full set of principles (smaller steps, precise language, repeat language, student explains, modeling, manipulatives, worked examples, repeated practice, error correction, fading support, fluency, and move on) and shows successive intensive intervention phases, each implementing all or some principles from earlier phases and each followed by a progress check. Note: Visit the National Center on Intensive Intervention website (www.intensiveintervention.org) for information and resources on academic and behavioral progress monitoring, enhancing instruction, and implementing intensive intervention.

Finally, Mr. Drummond realizes Molly needs to be able to complete mathematics problems on her own in her general education classroom or at home. He decides to begin fading support during instruction. Mr. Drummond decides to do less modeling and give Molly more independent practice on items where she is showing accuracy. When Molly appears to require support on a particular problem type, Mr. Drummond provides prompts while asking her to talk through the steps. As Molly gains proficiency, Mr. Drummond removes the worked examples and Molly does more problems on her own. In this way, Mr. Drummond fades support based on Molly’s performance on specific types of problems. Mr. Drummond continues to incorporate the principles for intensifying intervention when he introduces new topics (e.g., introduction to division of fractions), but he fades support systematically in areas where Molly is doing well. In this way, Molly is able to develop independence in mathematical problem solving.

Progress check. After 9 weeks of implementing these additional principles for intensifying instruction, Mr. Drummond reviews Molly’s progress (see Figure 6). Molly has acquired so much skill that the trend of her progress is now steeper than the goal line. Although Molly has not yet reached her actual goal, she is on track toward meeting (and actually exceeding) her goal by the end of the year. Mr. Drummond will meet with the IEP team to discuss the possibility of raising Molly’s goal; trying to accelerate Molly’s progress even more will help to narrow the gap between Molly’s performance and that of her peers. Because Molly has not met her goal yet for fifth-grade problems and because he has seen only slight improvement on his monthly checks on sixth-grade probes, Mr. Drummond thinks it is premature to change the goal and progress monitoring to reflect sixth-grade standards. He anticipates, however, that Molly will be able to transition to more difficult material during the following year and may show even faster rates of improvement in the future, especially if he continues to use principles of DBI to monitor Molly’s growth and to adapt instructional interventions to better meet her needs.

Conclusion

Many schools are using multitiered systems of support for students with mathematics disabilities (Gersten et al., 2009), so it is necessary for teachers to understand how to use DBI to create instructional programs that benefit individual students. As highlighted in this special issue of TEACHING Exceptional Children, DBI is an iterative process that teachers can use to create intervention adaptations and to determine whether these adaptations are appropriate or need to be changed. Many students like Molly are expected not only to meet IEP goals but also to perform well on standardized assessments aligned with the CCSS; teachers therefore need to be able to individualize instruction to help students meet or exceed individual and district goals (Jitendra, 2012; Powell, Fuchs, & Fuchs, 2013). Figure 7 provides a structure to support educators like Mr. Drummond in identifying a process for intensifying interventions, to more closely respond to students’ instructional needs.

References

Bailey, D. H., Hoard, M. K., Nugent, L., & Geary, D. C. (2012). Competence with fractions predicts gains in mathematics achievement. Journal of Experimental Child Psychology, 113, 447–455. http://dx.doi.org/10.1016/j.jecp.2012.06.004

Fuchs, L. S., Fuchs, D., Powell, S. R., Seethaler, P. M., Cirino, P. S., & Fletcher, J. M. (2008). Intensive intervention for students with mathematics disabilities: Seven principles of effective practice. Learning Disability Quarterly, 31, 79-92.

Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star, J. R., & Witzel, B. (2009). Assisting students struggling with mathematics: Response to intervention (RtI) for elementary and middle schools (NCEE 2009-4060). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

Jitendra, A. K. (2012). Understanding and accessing standards-based mathematics for students with mathematics difficulties. Learning Disability Quarterly, 36, 5–8. http://dx.doi.org/10.1177/0731948712455337

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common core state standards for mathematics. Washington, DC: Author.

Powell, S. R., Fuchs, L. S., & Fuchs, D. (2013). Reaching the mountaintop: Addressing the Common Core State Standards in mathematics for students with mathematics difficulties. Learning Disabilities Research and Practice, 28, 38–48. http://dx.doi.org/10.1111/ldrp.12001

Vaughn, S., Wanzek, J., Murray, C. S., & Roberts, G. (2012). Intensive interventions for students struggling in reading and mathematics: A practice guide. Portsmouth, NH: RMC Research Corporation, Center on Instruction.

Sarah R. Powell (Texas CEC), Assistant Professor of Special Education, University of Texas at Austin, Austin, Texas; Pamela M. Stecker (South Carolina CEC), Professor of Special Education, Clemson University, South Carolina.

Address correspondence regarding this article to Sarah R. Powell, 1 University Station, D5300, Austin, TX 78712 (e-mail: [email protected]).

This work was supported in part by the National Center on Intensive Intervention (Grant No. H326Q110005), which was awarded to the American Institutes for Research by the Office of Special Education Programs, U.S. Department of Education (Celia Rosenquist, OSEP Project Officer). The views expressed herein do not necessarily represent the positions or policies of the U.S. Department of Education. No official endorsement by the U.S. Department of Education of any product, commodity, service, or enterprise mentioned in this publication is intended or should be inferred.

TEACHING Exceptional Children, Vol. 46, No. 4, pp. 31–37. Copyright 2014 The Author(s).



