184 THE JOURNAL OF SPECIAL EDUCATION VOL. 37/NO. 3/2003/PP. 184–192
Developments in Curriculum-Based Measurement
Stanley L. Deno, University of Minnesota
Curriculum-based measurement (CBM) is an approach for assessing the growth of students in basic
skills that originated uniquely in special education. A substantial research literature has developed to
demonstrate that CBM can be used effectively to gather student performance data to support a wide
range of educational decisions. Those decisions include screening to identify students academically at risk, evaluating prereferral interventions, determining eligibility for and placement in remedial and special education programs, formatively evaluating instruction, and evaluating reintegration and inclusion of students in mainstream
programs. Beyond those fundamental uses of CBM, recent research has been conducted on using CBM
to predict success in high-stakes assessment, to measure growth in content areas in secondary school
programs, and to assess growth in early childhood programs. In this article, best practices in CBM are
described and empirical support for those practices is identified. Illustrations of the successful uses of
CBM to improve educational decision making are provided.
The special characteristics of learners with disabilities have long driven the development of alternative, specialized methods for assessing their needs. Perhaps the classic example of this phenomenon is the work of Alfred Binet, who, commissioned by the minister of public instruction in France, worked with Theodore Simon to explore the possibility of using different structured tasks to differentially diagnose and prescribe educational programs for students who might not profit from regular classroom instruction. Although Binet’s work subsequently was
subverted by other efforts to scale intelligence, it is important
to remember that Binet’s purpose was to identify more effective programs for educating students rather than excluding
them. The innovation in assessment presented in this article,
curriculum-based measurement (CBM; Deno, 1985), is also
intended to improve educational programs.
Background
CBM was developed to test the effectiveness of a special education intervention model called data-based program modification (DBPM; Deno & Mirkin, 1977). That model was based
on the idea that special education teachers could use repeated
measurement data to formatively evaluate their instruction
and improve their effectiveness. To empirically test teacher
use of DBPM, a research and development program was conducted for 6 years through the federally funded University of
Minnesota Institute for Research on Learning Disabilities
(IRLD).
One result of the IRLD formative evaluation research
was the development of a generic set of progress monitoring
procedures in reading, spelling, and written expression. Those
procedures include specification of (a) the core outcome tasks
on which performance should be measured; (b) the stimulus
items, the measurement activities, and the scoring of performance
to produce technically adequate data; and (c) the decision
rules used to improve educational programs. Ultimately, a set
of criteria was specified that was used to establish the technical adequacy of the measures, the treatment validity or
utility of the measures, and the logistical feasibility of the
measures (Deno & Fuchs, 1987). Since then, CBM data have
been used across a wide range of assessment activities, including screening, prereferral evaluation, placement in remedial and special education programs, formative evaluation, and
evaluation of reintegration and inclusion. Recently, research
has explored the use of CBM data to predict success on high-stakes assessment and to measure growth in content areas in
secondary school programs and in early childhood special education. The remainder of this article addresses the successful uses of CBM to accomplish these purposes.
CBM Characteristics
When the generic procedures for measurement are employed
with stimulus materials drawn directly from the instructional
materials used by teachers in their classrooms, the approach
is referred to as curriculum-based. Because evidence has shown
that the same procedures can be used successfully with stimulus materials drawn from other sources, the generic procedures
have been referred to as general outcome measures (GOMs;
L. S. Fuchs & Deno, 1994) or dynamic indicators of basic skills
(DIBS; Shinn, 1998). In contrast to the term curriculum-based assessment, which has been used to refer to a wide range of informal assessment procedures, curriculum-based measurement refers to a specific set of standard procedures that include the following characteristics.

Address: Stanley L. Deno, University of Minnesota, Burton Hall, 178 Pillsbury Dr. S.E., Minneapolis, MN 55455; e-mail: denox001@umn.edu
Technically Adequate
The reliability and validity of CBM have been achieved through
using standardized observational procedures for repeatedly
sampling performance on core reading, writing, and arithmetic
skills. Unlike most informal measures, CBM treats the psychometric concepts of reliability and validity as primary characteristics (Good & Jefferson, 1998; Shinn, 1989).
Standard Measurement Tasks
(“What to Measure”)

The standard tasks identified for use in CBM include reading aloud from text and selecting words deleted from text (maze) in reading, writing word sequences when given a story starter or picture in writing, writing letter sequences from dictation in spelling, and writing correct answers/digits in solving problems in arithmetic.

Prescriptive Stimulus Materials

Because the materials used for assessment in CBM may be obtained from the instructional materials used by the local school, specifications are provided for materials selection (e.g., Shinn, 1989). Key factors in this selection process are the representativeness and equivalence of the stimulus materials. Both factors are addressed to increase the utility of the procedures for making instructional decisions.

Administration and Scoring
(“How to Measure”)

CBM procedures include specification of sample duration, administration, student directions, and scoring procedures. Combining the prescriptive selection of stimulus materials with standardization of the procedures is necessary to ensure sufficient reliability and utility of the data for individual and group comparisons across time. Standardization also enables summarization of group data for developing local norms and for general descriptions of program effects across students (Shinn, 1995).

Performance Sampling

In CBM, academic performance is sampled through the use of direct observation procedures. All CBM scores are obtained by counting the number of correct and incorrect responses made in a fixed time period. In reading, for example, the most commonly used measure requires a student to read aloud from a text for 1 minute while an observer counts the number of correctly and incorrectly pronounced words.

Multiple Equivalent Samples

One of the most distinctive and important features of CBM is that performance is repeatedly sampled across time. The repeated observations of performance are structured so that students respond to different but equivalent stimulus materials that are drawn from the same general source. For example, on the first occasion in measuring reading proficiency, students are asked to read aloud for 1 minute from a text passage that they have not previously read. On the next occasion, the students read again from the same book, but from a different, unfamiliar, and equally difficult text passage. In this way, task difficulty is held constant and inferences can be drawn regarding the generalizability of student proficiency at reading comparable, but unfamiliar, text.

Time Efficient

CBM is designed for efficiency. Multiple performance sampling requires that measures be short. CBM performance samples are 1 to 3 minutes in duration, depending on the skill being measured and the number of samples necessary to maximize reliability.

Easy to Teach

Another logistical consideration in using CBM is the ease with which professionals, paraprofessionals, and parents can learn to use the procedures in such a way that the data are reliable.

Common Uses

The original purpose of CBM was to enable teachers to formatively evaluate their instruction. What follows is a summary, beginning with the more common and older applications of CBM and progressing to recent applications.
Improving Individual Instructional
Programs
The formative evaluation model based on CBM is represented
graphically in Figure 1. As can be seen in the figure, individual
student performance during an initial baseline phase is plotted and a goal is established. A progress line connecting the
initial level and the goal depicts the rate of improvement necessary for the student to achieve the goal. The vertical lines
on the graph indicate the point at which a change is made in
the student’s program. At each point, judgments are made regarding the effectiveness of the instruction being provided.
This systematic approach to setting goals, monitoring growth,
changing programs, and evaluating the effects of changes is
the formative evaluation model. Research on the achievement
effects of using this approach has revealed that the students
of teachers who use systematic formative evaluation based on CBM have greater achievement rates (L. S. Fuchs, Deno, & Mirkin, 1984).

FIGURE 1. CBM progress graph.
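The goal line described above lends itself to simple arithmetic. As a rough sketch (the numbers and function names below are hypothetical, not a procedure specified in the article), the weekly rate of improvement implied by a baseline level and a goal, and the aimline value at any given week, can be computed as follows:

```python
def aimline_slope(baseline, goal, weeks):
    """Weekly growth needed to move from the baseline level to the goal."""
    return (goal - baseline) / weeks

def expected_score(baseline, slope, week):
    """Point on the aimline (expected performance) at a given week."""
    return baseline + slope * week

# Hypothetical example: baseline of 20 words read correctly per minute,
# with a goal of 70 words correct per minute 25 weeks later.
slope = aimline_slope(20, 70, 25)               # 2.0 words per week
on_track = 41 >= expected_score(20, slope, 10)  # observed 41 vs. expected 40
```

In practice, decision rules of the kind the article mentions compare several consecutive data points against the aimline before any judgment about changing the program is made.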
Predicting Performance on
Important Criteria
Teachers’ effective use of formative evaluation to increase
achievement requires that CBM data be closely associated
with a wide range of criteria important to making educational
decisions (Good & Jefferson, 1998; Marston, 1989). All of
the measures used in CBM possess relatively high criterion-validity coefficients (L. S. Fuchs, Fuchs, & Maxwell, 1988;
Marston, 1989). For that reason, CBM data can be used not
only to evaluate instruction but also to classify age and grade
developmental status (Deno, 1985; Shinn, 2002), predict and
improve on teacher judgments regarding student proficiency
(Marston, Mirkin, & Deno, 1984), discriminate between students achieving typically and those in compensatory programs
(Marston & Magnusson, 1988), and predict who will succeed
on high-stakes tests (Good, Simmons, & Kameenui, 2001).
Recent research efforts have been successfully directed toward establishing reasonable growth standards for purposes of
setting both individual and program standards (Deno, Fuchs,
Marston, & Shin, 2001).
Enhancing Teacher Instructional Planning
Several related outcomes are also produced through a formative evaluation model based on CBM. In L. S. Fuchs, Deno,
and Mirkin’s (1984) study, near the end of the school year,
teachers of reading were asked whether they could identify
their students’ reading goals. It is not surprising but important
to note that those teachers using CBM in formative evaluation
were more accurate in identifying their students’ goals. In a
related study, when teachers used CBM within a formative
evaluation model, it significantly affected both the frequency
and quality of the instructional changes they made as they responded to unsatisfactory student progress (L. S. Fuchs, Fuchs,
Hamlett, & Stecker, 1991; Fuchs, Fuchs, & Hamlett, 1993).
Developing Norms
CBM can be used to develop norms for decision making when
the same CBMs are administered to normative peer samples.
In Figure 1, individual performance can be compared to the
average of peer performance, which is represented by the line
well above the target student’s level during baseline and at the
end of the year. This reference is important because it reveals
the magnitude of the difference between the performances of
individual students and those of their peers with the same
stimulus materials. Teachers can create their own peer reference by sampling the performance of other students in the
same classroom. Because CBM is standardized, it has also been
effectively used to create school and district norms. When
local norms are created, peer references are more broadly representative of students in the same grade, in the same school,
or across schools within a district (Marston & Magnusson,
1988; Shinn, 2002). Using CBM to create local norms has been
especially useful in urban school districts where concerns
exist regarding the degree to which the norms of commercially available standardized tests reflect the rapidly changing
diversity of student populations.
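As a minimal sketch of how a classroom peer reference of this kind might be summarized (the scores and function names below are hypothetical, not drawn from the article), a peer sample can be reduced to a median level against which an individual student's score on the same materials is compared:

```python
from statistics import median

def peer_reference(scores):
    """Median of a peer sample on the same CBM task and materials."""
    return median(scores)

def discrepancy_ratio(peer_level, student_score):
    """How many times below the peer level the student is performing."""
    return peer_level / student_score

# Hypothetical grade-2 oral reading scores (words correct per minute)
peers = [62, 55, 71, 48, 66, 59, 80, 52]
level = peer_reference(peers)         # 60.5
ratio = discrepancy_ratio(level, 22)  # student performs ~2.75x below peers
```

A ratio of this sort is one common way of expressing the magnitude of the difference between an individual's performance and that of peers measured with the same stimulus materials.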
Increasing Ease of Communication
Although the effectiveness of CBM in increasing both teacher
and student awareness of goals has already been discussed, it
is important to point out that the CBM graph, with its multiple references, creates opportunities for clearer communication. It has now become common practice for teachers to use
the CBM data in parent conferences and at multidisciplinary
team meetings to provide a framework for communicating individual student status. Professional educators and parents can
easily use the CBM graph because little or no interpretation
of the scores is necessary (Shinn, Habedank, & Good, 1993).
This contrasts sharply with the complexities related to communicating the results of commercially available standardized
test scores. A simple illustration of both the ease and effectiveness of communicating about CBM data can be found in
the results of the teacher planning study mentioned earlier
(i.e., Fuchs, Deno, & Mirkin, 1984). In that study, students as
well as teachers were asked whether they knew their annual
reading goals and were asked to specify those goals. Those
students whose teachers were using CBM and formative evaluation not only expressed that they knew those goals but also
were able to accurately specify their target reading scores.
Screening to Identify Students
Academically at Risk
An increasingly common use of CBM is to screen students
who are at risk for academic failure. As mentioned previously,
because CBM procedures are standardized, they can be used
to compare individual performance to that of the group. The
use of local norms is common for this purpose, but norms are
not required. In a study by Deno, Reschly-Anderson, Lembke, Zorka, and Callender (2002), all of the students in a large
urban elementary school were given three standard CBM maze
passages and their performance was aggregated within and
across grades. The lowest 20% of the students on the CBM
maze (multiple-choice cloze) measure in each grade were considered highly at risk and were required to undergo progress
monitoring every other week with the more conventional CBM
oral reading measure. Identification of high-risk students has
now become commonplace among schools practicing CBM
(Marston & Magnusson, 1988).
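The lowest-20% screening rule used in the study above can be sketched as follows (student labels and maze scores are hypothetical, and the function name is invented for illustration):

```python
def at_risk(scores_by_student, fraction=0.20):
    """Return the students falling in the lowest `fraction` of scores."""
    ranked = sorted(scores_by_student.items(), key=lambda kv: kv[1])
    n_flagged = max(1, round(len(ranked) * fraction))
    return [name for name, _ in ranked[:n_flagged]]

# Hypothetical maze scores (correct word choices) for one grade
grade3 = {"A": 18, "B": 7, "C": 25, "D": 11, "E": 30,
          "F": 9, "G": 21, "H": 15, "I": 27, "J": 13}
flagged = at_risk(grade3)  # lowest 20% of 10 students -> ["B", "F"]
```

Students flagged this way would then receive the biweekly progress monitoring with the more conventional oral reading measure described above.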
Evaluating Classroom Prereferral
Interventions
The cost and the consequences of special education are recurring issues in the literature of special education. Of particular
concern is the possibility that some students are being referred
for and placed in special education when they would succeed
in general class programs with greater accommodation by
classroom teachers. One approach to addressing this issue is
to require classroom teachers to conduct prereferral interventions to establish that such accommodations are insufficient. A problem with this approach has been that little useful data have been available for appraising the effects of those prereferral interventions. Because CBM data are sensitive to the effects of program changes over relatively short time periods, they can be
used to aid in the evaluation of prereferral interventions. The
use of CBM in evaluating prereferral interventions is the first
component of the Problem Solving Model (Deno, 1989) that has
been implemented at both the state and district levels (Shinn,
1995; Tilly & Grimes, 1998). The Problem Solving Model enables general and special educators to collaborate in the early
stages of child study to determine with some validity that the
problems of skill development faced by a student are more
than “instructional failures.” Documentation stating that the
problem is not readily solvable by the classroom teacher becomes the basis for special education eligibility assessment.
Reducing Bias in Assessment
The Problem Solving Model using CBM has attracted attention as a means for reducing bias in the assessment process.
Because teachers typically are the source of referrals to special education, their validity as “tests” of student success in
the classroom is an issue that has been examined using CBM
(Shinn, Tindal, & Spira, 1987). Indeed, in one large urban school
system, the Office of Civil Rights joined forces with the district to examine whether the CBM data used as part of the
Problem Solving Model could diminish the likelihood of minority students being inappropriately placed in special education (Minneapolis Public Schools, 2001). Data from that
school district revealed that after implementation of the
model, the proportion of non-White students referred for and
placed in special education did not substantially change, but
it became more likely that problems were addressed through
general education classroom intervention than through special education placement. In addition, students who were
placed in special education demonstrated lower achievement
test scores than they had prior to the introduction of the Problem Solving Model.
Offering Alternative Special Education
Identification Procedures
There has been widespread dissatisfaction with traditional
approaches to identifying students for special education that
rely on standardized tests of ability, achievement, or both
(Reschly, 1988). Despite this dissatisfaction, few alternatives
have been offered to replace these procedures. Over the past
20 years, the use of CBM within a systematic decision framework has been explored as a basis for developing alternative
identification procedures (Marston & Magnusson, 1988; Marston, Mirkin, & Deno, 1984; Shinn, 1989). Recently, the use
of CBM to test students’ responsiveness to treatment (L. S.
Fuchs & Fuchs, 1998) has gained favor within policy-making
groups. For example, the responsiveness to treatment approach
has been recommended by the President’s Commission on Excellence in Special Education (2002) as an alternative to traditional standardized testing for identifying students with learning
disabilities. That approach is an extension of prereferral evaluation and the Problem Solving Model to evaluate increased
levels of intensity in instructional intervention, and it relies on
CBM. For example, if a student fails to increase his or her rate
of growth in response to several general education classroom
interventions, that student might be considered eligible for
special education. This alternative approach to eligibility determination rooted in the Problem Solving Model has created
an entirely different perspective on the concept of disability
(Tilly, Reschly, & Grimes, 1999).
Recommending and Evaluating Inclusion
As increased emphasis has been placed on inclusion of students with disabilities in general education classrooms, and
as laws and regulations have required schools to ensure access to the general education curriculum, the need to evaluate
the effects of these changes on the academic development of
students with disabilities has increased. CBM has proved to
be a very useful tool for those accountable for the progress of
students with disabilities as they seek to provide education for
these students in the mainstream curriculum. The general
strategy employed when using CBM to evaluate inclusion has
been to collect data before and after integration into general
education instruction and then to continue monitoring student progress to ensure that reintegration of students is occurring responsibly (D. Fuchs, Roberts, Fuchs, & Bowers, 1996;
Powell-Smith & Stewart, 1998). The results of the research in
this area provide clear evidence that both special educators
and classroom teachers can use CBM to provide ongoing documentation of student progress and to signal the need for increased intensification of instruction when inclusive programs
are unsuccessful.
Predicting Performance
on High-Stakes Assessment
Perhaps no other aspects of contemporary education are receiving greater attention than accountability and high-stakes
assessment. At federal and state levels, pressure is being
applied to schools to “step up” to the challenge of reform
movements rooted in testing. Schools are being placed on “probation” and being threatened with the prospect of reconstitution (i.e., dispersal and replacement of the existing school
staff). In this environment, the annual high-stakes summative
evaluations have become a kind of Sword of Damocles hanging over the heads of teachers and administrators everywhere.
Whatever one might think about this approach to improving
education, one rational response has been to seek progress-monitoring data that enable school staff members to formatively evaluate programs and revise them when they appear to be unsuccessful in helping students pass the annual high-stakes tests. The criterion validity of CBM data has become
the basis for making judgments about whether students will
achieve mandated levels of performance on benchmark tests.
In a variety of studies, high correlations (.65–.85) have been
obtained between CBM scores for reading and math and performance on high-stakes assessments (cf. Deno et al., 2002;
Good, Simmons, & Kameenui, 2001; Muyskens & Marston,
2002).
A related, noteworthy aspect of the research and development in this area has been the movement from computing
simple correlations to identifying criterion levels of performance on the CBMs that teachers can use as targets for performance. Evidence has accumulated, for example, regarding
the relationship between CBM reading scores and pass rates
on state assessments. Students reading at least 40 words correctly in 1 minute by the end of first grade are on a trajectory
to succeed in learning to read, and students reading more than
110 words correctly in 1 minute by the beginning of third
grade are most likely to pass their state assessments in Oregon
(Good et al., 2001). Eighth-grade students who can read at least
145 words from local newspaper passages correctly in 1 minute are almost certain to pass the Minnesota Basic Skills Test
in reading (Muyskens & Marston, 2002). Preliminary research has been, and continues to be, conducted to identify
criterion levels of performance on the CBM maze and math
measures, as well.
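These reported cut scores can be collected into a simple lookup. The sketch below is illustrative only (the function and key names are invented), with thresholds taken from the studies cited above:

```python
# Words-correct-per-minute thresholds reported in the studies cited above:
# end of grade 1 and start of grade 3 (Good et al., 2001), and grade 8 on
# newspaper passages (Muyskens & Marston, 2002).
BENCHMARKS = {"grade1_end": 40, "grade3_start": 110, "grade8": 145}

def meets_benchmark(checkpoint, words_correct_per_minute):
    """True if the score reaches the reported criterion for the checkpoint."""
    return words_correct_per_minute >= BENCHMARKS[checkpoint]

meets_benchmark("grade1_end", 52)    # True
meets_benchmark("grade3_start", 98)  # False
```

Because these thresholds come from particular state tests and passage sets, they would need local validation before being used as performance targets elsewhere.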
Measuring Growth in Secondary School
Programs and Content Areas
CBM was developed initially to help teachers at the elementary school level increase the achievement of students struggling to learn basic skills in reading, writing, and arithmetic.
As development in those areas has proceeded, teachers in secondary school programs have become interested in the application of similar formative evaluation approaches with their
students. For that reason, technical work has proceeded on establishing CBM progress monitoring methods for assessing
student growth both in advanced academic skills and in content area learning (Espin, Scierka, Skare, & Halvorson, 1999;
Espin & Tindal, 1998). The technical developments in using
CBM methods to assess growth in reading and writing at the
secondary level have generated outcomes that appear both
promising and tentative. In general, attempts to establish the
criterion validity of the same reading and writing measures
that have been used at the elementary level have revealed that
those measures do correlate with important criteria (e.g., test
scores, grade point average, teacher judgment), but the correlations are not as strong as for elementary students. One exception involves a recent study conducted by Muyskens and
Marston (2002) in which correlations were high for students
in eighth grade. That research was conducted with middle
school students, rather than high school students, so it is possible that further studies will identify those upper levels of
competence for which ordinary CBMs will be effective.
Assessing English Language
Learning Students
A particular problem confronting schools in the United States
is the dramatically increasing proportion of students whose
first language is not English and who are still learning to speak
English while they are learning to read and write in English.
Commercially available standardized tests have not been useful because they have not included the full range of languages
represented among English language learning (ELL) students
within their norm samples. More significant, many achievement tests draw heavily on background knowledge of the
American culture in structuring questions. Among other problems that exist because of the lack of technically adequate procedures is how to distinguish ELL students who are having
difficulty learning because of their lack of proficiency in English from ELL students whose struggles also stem from special disabilities.
Several studies have explored the use of CBM to overcome
the problems of assessing ELL students and to monitor their
growth in mainstream classrooms. Baker and colleagues (i.e.,
Baker & Good, 1995; Baker, Plasencia-Peinado, & Lezcano-Lytle, 1998) have focused primarily on using the CBM reading scores of Spanish-speaking ELL students to evaluate their
progress in general education programs. That research establishes levels of reliability and validity for the CBM procedures
with ELL students in both their native and English languages
that are comparable to those of native speakers of English.
Furthermore, longitudinal analyses have revealed that students who begin with comparable proficiency in English often
acquire English language skills at very different rates. The apparent technical adequacy of CBM has led at least one urban
school system to use CBM procedures for developing norms
across reading, writing, and arithmetic on their ELL students
(M. Robinson, personal communication). CBM also has been
used to predict differences in the success rates of middle school
ELL students on state assessments as a function of their level
of reading proficiency (Muyskens & Marston, 2002). In addition, research has been conducted using CBM with students
in countries where languages other than English are spoken.
The evidence from this body of research indicates that the procedures and tasks to be used for measurement need to be consistent with formal differences in the language. For example,
oral reading can be used to measure growth in other phonetic
languages, such as Korean, but the maze procedure appears to be more appropriate for measuring growth in a logographic language, such as Chinese (Yeh, 1992).
Predicting Success in
Early Childhood Education
The criterion validity of CBM oral reading scores has been
sufficiently established to become an important criterion for
establishing the predictive validity of prereading measures and
the effectiveness of early literacy interventions. With the ascendant interest in the role played by phonological skills in
learning to read, the utility of scores from measures of phonological skill has been established by examining their accuracy
in predicting beginning oral reading scores (Kaminski & Good,
1996). As cited in Good, Simmons, and Kameenui (2001), evidence has developed that CBM oral reading performance at
the end of first grade is a significant predictor of subsequent
reading success. Research in this area has established important linkages among measures of phonological skill in kindergarten, oral reading performance in Grades 1 through 3, and
success on state assessments.
Frequency of Use
At least two studies have been conducted to examine the factors that can function as barriers to CBM implementation
(Wesson, Deno, & King, 1984; Yell, Deno, & Marston, 1992).
More than 15 years ago, Wesson et al. (1984) found that nearly
85% of the 300 teachers surveyed reported they were aware
of direct and frequent measurement of student progress; yet,
only half of those familiar with the procedures were using
them. In Yell et al.’s (1992) study, only teachers using CBM
were surveyed. Teachers in both studies consistently identified time as the single most important barrier to implementing
the measurement procedures. An interesting related finding
was that teachers using CBM estimated that it took less than
10% of their instructional time to conduct the measurement.
Nevertheless, given the time constraints under which teachers
operate, they seem, inevitably, to believe that any additional activity cannot be accommodated in their daily schedules.
A Uniquely Special
Education Development
CBM is a procedure that was developed by special educators
for special educators. Having said that, it is important to recognize that related work both in and outside of special education served as a basis for CBM and supported the use of
CBM in general education. All CBM procedures involve the
direct observation of behavior and use the single-case analytic procedures that are characteristic of applied behavior
analysis (ABA; Deno, 1997). ABA is a system developed for
use with any behavior in any setting; thus, much of the work
of ABA has addressed behavior in the mainstream of ordinary
life. In that sense, CBM is, in part, based on procedures derived from sources outside of special education. At the same
time, the most extensive applications of ABA have been in
special education, and the early applications of ABA to academic instruction occurred most often within special education (e.g., Lovitt, 1976).
The use of CBM to measure growth in reading has relied
extensively on time-limited samples of oral reading. In the
general literature of reading development, the speed with which
students are able to translate text into spoken language is
viewed as one of the most significant characteristics of skillful reading (Adams, 1990). In addition, psychologists interested in the study of reading have long viewed automatic
responding as an essential element in reading comprehension
(LaBerge & Samuels, 1974). Although oral reading fluency is
not always defined in terms of speed and accuracy of word
recognition, that use of the term is so widespread that the
recommendations from the National Reading Panel (2000)
regarding fluency have been interpreted to mean speed and
accuracy of oral reading. However oral reading fluency is defined, the current broad interest in this subject has contributed
to the rapid dissemination of CBM reading procedures. As
continued research on the relationship between rapid and accurate reading of words and comprehension reveals the close
connection between these key elements of reading (L. S. Fuchs,
Fuchs, Hosp, & Jenkins, 2001; Jenkins, Fuchs, van den Broek,
Espin, & Deno, 2002), CBM is likely to become an ever-increasing focus of interest for special and general educators alike. Whether similar accelerated interest in CBM will
occur for other basic skills remains to be seen.
The most effective uses of CBM in the formative evaluation of individual student programs almost certainly occur
in settings where individual (special) education teachers have
the time and skills to respond to the charted progress of individual students. Special educators designed, for use in special education, the formative evaluation model that has been demonstrably effective in improving the achievement of individual students. Initially, this meant evaluating the success of
teachers at accelerating the progress of their special education
students in the mainstream curriculum. As caseload limits for
special educators have been raised or eliminated, and as inclusive education has received more attention, teachers have
had less time and too many students to use CBM effectively
in formative evaluation.
The shift, then, has been to use CBM to support general
educators’ efforts to accommodate the increased diversity in
classrooms produced, in part, by inclusion of students with
disabilities (L. S. Fuchs, Fuchs, Hamlett, Phillips, & Bentz,
1994). CBM has proven to be a useful tool for this purpose.
L. S. Fuchs and Fuchs (1998) provided an interesting and important illustration of the efforts required to tailor CBM for
use in the general education classroom. Their work clearly
demonstrates the effort that must be made to effect CBM implementation with teachers who are working with large groups
of students in general education classrooms. Even as those efforts are successful, it is unlikely that CBM with large groups
can contribute to improved student achievement to the degree
that it does when used to tailor individual student programs.
In those settings where special education is organized for individual students, the unique contributions of CBM most certainly will be the greatest.
REFERENCES
Adams, M. J. (1990). Beginning to read: Thinking and learning about print.
Cambridge, MA: MIT Press.
Baker, S. K., & Good, R. H. (1995). Curriculum-based measurement of English reading with bilingual Hispanic students: A validation study with
second-grade students. School Psychology Review, 24, 561–578.
Baker, S. K., Plasencia-Peinado, J., & Lezcano-Lytle, V. (1998). The use of
curriculum-based measurement with language-minority students. In M.
R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 175–213). New York: Guilford Press.
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.
Deno, S. L. (1989). Curriculum-based measurement and alternative special
education services: A fundamental and direct relationship. In M. R.
Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 1–17). New York: Guilford Press.
Deno, S. L. (1997). “Whether” thou goest: Perspectives on progress monitoring. In E. Kameenui, J. Lloyd, & D. Chard (Eds.), Issues in educating students with disabilities (pp. 77–99). Mahwah, NJ: Erlbaum.
Deno, S. L., & Fuchs, L. S. (1987). Developing curriculum-based measurement systems for databased special education problem solving. Focus
on Exceptional Children, 19(8), 1–16.
Deno, S. L., Fuchs, L. S., Marston, D. B., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with
learning disabilities. School Psychology Review, 30, 507–524.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A
manual. Reston, VA: Council for Exceptional Children.
Deno, S. L., Reschly-Anderson, A., Lembke, E., Zorka, H., & Callender, S.
(2002). A model for school wide implementation: A case example. Paper
presented at the annual meeting of the National Association of School
Psychologists, Chicago, IL.
Espin, C. A., Scierka, B. J., Skare, S. S., & Halvorson, N. (1999). Criterion-related validity of curriculum-based measures in writing for secondary
students. Reading and Writing Quarterly, 15, 5–28.
Espin, C. A., & Tindal, G. (1998). Curriculum-based measurement for secondary students. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 214–253). New York: Guilford Press.
Fuchs, D., Roberts, P. H., Fuchs, L. S., & Bowers, J. (1996). Reintegrating
students with learning disabilities into the mainstream: A two-year
study. Learning Disabilities Research & Practice, 11, 214–229.
Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance
assessment be based in the curriculum? Exceptional Children, 61, 15–24.
Fuchs, L. S., Deno, S., & Mirkin, P. (1984). Effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement,
and student awareness of learning. American Educational Research
Journal, 21, 449–460.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for
reconceptualizing the identification of learning disabilities. Learning
Disabilities Research & Practice, 13, 204–219.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1993). Technological advances
linking the assessment of students’ academic proficiency to instructional
planning. Journal of Special Education Technology, 12, 49–62.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Bentz, J. (1994).
Classwide curriculum-based measurement: Helping general educators
meet the challenge of student diversity. Exceptional Children, 60,
518–537.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of
curriculum-based measurement and consultation on teacher planning and
student achievement in mathematics operations. American Educational
Research Journal, 28, 617–641.
Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical,
and historical analysis. Scientific Studies of Reading, 5, 239–256.
Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9, 20–28.
Good, R., & Jefferson, G. (1998). Contemporary perspectives on curriculum-based measurement validity. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 61–88). New York: Guilford
Press.
Good, R. H. III, Simmons, D. C., & Kameenui, E. J. (2001). The importance
and decision-making utility of a continuum of fluency-based indicators
of foundational reading skills for third-grade high stakes outcomes. Scientific Studies of Reading, 5, 257–288.
Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C. A., & Deno, S. L.
(2002). Reading fluency: Criterion validity and sources of individual
difference. Unpublished manuscript.
Kaminski, R. A., & Good, R. H. (1996). Toward a technology for assessing
early literacy skills. School Psychology Review, 25, 215–227.
Laberge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323.
Lovitt, T. C. (1976). Applied behavior analysis techniques and curriculum
research: Implications for instruction. In N. G. Haring & R. L. Schiefelbusch (Eds.), Teaching special children (pp. 112–156). New York: McGraw-Hill.
Marston, D. B. (1989). Curriculum-based measurement: What it is and why
do it? In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing
special children (pp. 18–78). New York: Guilford Press.
Marston, D. B., & Magnusson, D. (1988). Curriculum-based assessment: District-level implementation. In J. Graden, J. Zins, & M. Curtis (Eds.), Alternative educational delivery systems: Enhancing instructional options
for all students (pp. 137–172). Washington, DC: National Association
of School Psychologists.
Marston, D. B., Mirkin, P. K., & Deno, S. L. (1984). Curriculum-based measurement: An alternative to traditional screening, referral, and identification. The Journal of Special Education, 18, 109–117.
Minneapolis Public Schools. (2001). Report of the external review committee on the Minneapolis Problem Solving Model.
Muyskens, P., & Marston, D. B. (2002). Predicting success on the Minnesota
Basic Skills Test in reading using CBM. Unpublished manuscript, Minneapolis Public Schools.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its
implications for reading instruction: Reports of the subgroups.
Bethesda, MD: National Institutes of Health.
Powell-Smith, K. A., & Stewart, L. H. (1998). The use of curriculum-based
measurement on the reintegration of students with mild disabilities. In
M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 254–307). New York: Guilford Press.
President’s Commission on Excellence in Special Education. (2002). A new
era: Revitalizing special education for children and their families. Retrieved from http://www.ed.gov/inits/commissionsboards/whspecialeducation/reports/index.html
Reschly, D. (1988). Special education reform: School psychology revolution.
School Psychology Review, 17, 459–475.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.
Shinn, M. (1995). Best practices in curriculum-based measurement and its
use in a problem-solving model. In J. Grimes & A. Thomas (Eds.), Best
practices in school psychology (Vol. 3, pp. 547–568). Silver Spring, MD:
National Association of School Psychologists.
Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. New York: Guilford Press.
Shinn, M. (2002). Best practices in using curriculum-based measurement in
a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 4, pp. 671–697). Silver Spring, MD: National Association of School Psychologists.
Shinn, M. R., Habedank, L., & Good, R. H. (1993). The effects of classroom
reading performance data on general education teachers’ and parents’
attitudes about reintegration. Exceptionality, 4, 205–228.
Shinn, M. R., Tindal, G., & Spira, D. (1987). Special education referrals as
an index of teacher tolerance: Are teachers imperfect tests? Exceptional
Children, 54, 34–40.
Tilly, W. D., & Grimes, J. (1998). Curriculum-based measurement: One vehicle for systematic educational reform. In M. R. Shinn (Ed.), Advanced
applications of curriculum-based measurement (pp. 32–88). New York:
Guilford Press.
Tilly, W. D., Reschly, D. J., & Grimes, J. (1999). Disability determination in
problem-solving systems: Conceptual foundations and critical components. In D. J. Reschly, W. D. Tilly, & J. Grimes (Eds.), Special education
in transition: Functional assessment and noncategorical programming
(pp. 221–254). Longmont, CO: Sopris West.
Wesson, C., Deno, S. L., & King, R. (1984). Direct and frequent measurement of student performance: If it’s good for us, why don’t we do it?
Learning Disability Quarterly, 7, 45–48.
Yeh, C. (1992). The use of passage reading measures to assess reading proficiency of Chinese elementary school students. Unpublished doctoral
dissertation, University of Minnesota.
Yell, M., Deno, S. L., & Marston, D. B. (1992). Barriers to implementing curriculum-based measurement. Diagnostique, 18, 99–105.
research report
Using Curriculum-Based Measurement to Improve Achievement
Suzanne Clarke
A data-driven method provides the most reliable indicator of
student progress in basic academic areas.
Response to intervention (RTI) is on the radar screen of most principals
these days—finding out what it is, how it can improve teaching and
learning, and what needs to be done to implement it effectively. One
critical component of RTI that will require particular attention from principals
is student progress monitoring, which is required in every tier of RTI. The most
commonly used and well-researched method of monitoring progress is
curriculum-based measurement (CBM).
Nearly 30 years of empirical evidence tells us
that CBM provides a valid and reliable indicator of
student progress in basic academic areas, especially
reading, math, and writing, and that it can have a
positive impact on student achievement (Foegen,
Jiban, & Deno, 2007; McMaster & Espin, 2007). Yet
CBM was not commonly used by teachers, particularly in general education classrooms (Hosp & Hosp,
2003; Ardoin et al., 2004), until the advent of RTI.
Research has shown that the data gathered from
CBM can be used in numerous educational decisions, such as screening, eligibility for special education, and re-integration. More recently, researchers
have been examining the effectiveness of CBM in
other areas as well, such as predicting performance
on high-stakes tests and measuring growth in content areas (Deno, 2003). Mellard and Johnson (2008)
discuss the use of CBM from an RTI perspective:
Within an RTI model, the types of decisions that a system
of progress monitoring can inform include whether a
student is making adequate progress in the general classroom, whether a student requires a more intensive level
of intervention, and whether a student has responded
successfully to an intervention and, therefore, can be
returned to the general classroom.
What Is CBM?
“CBM is a scientifically validated form of student
progress monitoring that incorporates standard
methods for test development, administration, scoring, and data utilization” (Stecker & Lembke, 2005).
It was developed so that teachers would have measurement and evaluation procedures that they could
“use routinely to make decisions about whether and
when to modify a student’s instructional program”
(Deno, 1985).
In contrast to standardized achievement tests,
which do not provide immediate feedback, CBM
tests are given frequently to track student progress
toward annual goals, monitor the effectiveness of
interventions, and make instructional changes as
needed throughout the year. As Wright (n.d.) points
out, “much of the power of CBM … seems to lie in
its ability to predict in a short time whether an intervention is working or needs to be altered.” In an RTI
context, CBM can help identify students in need of
interventions, decide which level of intervention is
most appropriate, and determine if an
intervention is successful (Mellard &
Johnson, 2008).
Unlike classroom assessments that
test mastery of a single skill, each CBM
test samples the year-long curriculum
and, therefore, measures small student
gains toward long-term goals (Stecker,
Fuchs, & Fuchs, 2005; Deno, Fuchs,
Marston, & Shin, 2001). For example,
a third-grade teacher typically tests
students on their mastery of multiplication immediately after completing that
unit. However, the math CBM would
include problems that test each skill
that students are expected to master by
the end of third grade (e.g., addition,
subtraction, multiplication, and division
problems). In this way, CBM provides
educators with an overall indicator of
student competence and progress in
the curriculum.
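The sampling idea above can be sketched in code. This is a minimal illustration, not an actual CBM instrument: the function name, the four-skill list, and the number ranges are assumptions chosen for the example.

```python
import random

def make_math_probe(n_items=20, seed=None):
    """Build one math CBM probe by sampling every skill in the year-long
    curriculum (here: the four operations), not just the current unit.
    Skill list and number ranges are illustrative assumptions."""
    rng = random.Random(seed)
    items = []
    for i in range(n_items):
        skill = ("addition", "subtraction", "multiplication", "division")[i % 4]
        if skill == "addition":
            items.append(f"{rng.randint(10, 99)} + {rng.randint(10, 99)} =")
        elif skill == "subtraction":
            a, b = sorted((rng.randint(10, 99), rng.randint(10, 99)), reverse=True)
            items.append(f"{a} - {b} =")
        elif skill == "multiplication":
            items.append(f"{rng.randint(2, 9)} x {rng.randint(2, 9)} =")
        else:  # division item built to have a whole-number answer
            d, q = rng.randint(2, 9), rng.randint(2, 9)
            items.append(f"{d * q} / {d} =")
    rng.shuffle(items)  # mix skills so items are not grouped by unit
    return items
```

Because each probe draws on the same skill mix, successive probes are different in content but equivalent in difficulty, which is what lets scores be compared over time.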
In addition to allowing educators to frequently measure student progress in the year-long curriculum, CBM offers some additional benefits:
- It can provide documentation of student progress for accountability purposes, including adequate yearly progress and individualized education programs;
- It can facilitate communication about student progress with parents and other professionals;
- It may result in fewer special education referrals;
- It allows teachers to compare students against other students in the classroom, rather than against national norms; and
- It allows schools and districts to develop local norms that can then guide teachers when interpreting data (National Center on Student Progress Monitoring, n.d.; Holland-Coviello, n.d.).
How Does CBM Work?
One of the key aspects of CBM is
that the “mechanics”—how the test is
administered, the directions given to
students, the procedures for scoring—
are standardized (Deno & Fuchs,
1987). Standardization is important
because it ensures that the data are
valid and reliable indicators of a student’s proficiency, allows for individual
and group data to be compared across
time, and facilitates the development
of local norms (Deno, 2003; Wright,
n.d.).
CBM probes, or tests, are easy and
quick to administer, and are generally
given once or twice per week. Each test
is different but of equivalent difficulty.
“Because CBM converts student academic behaviors into numbers (e.g.,
number of words read correctly per
minute), teachers can easily chart the
resulting data over time” (Wright, n.d.)
and see when instructional changes
need to be made. For example, the
oral reading fluency probe has students
read aloud from a passage for one
minute as the teacher follows along,
marking words that are read incorrectly.
The number of words read correctly is
recorded and graphed. It takes approximately five minutes to administer, score,
and graph the result.
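The one-minute probe described above reduces to simple arithmetic. The sketch below is illustrative only; the function name and inputs (total words attempted, teacher-marked errors) are assumptions.

```python
def words_correct_per_minute(words_attempted, errors):
    """Score a 1-minute oral reading fluency probe: words read
    correctly = words attempted minus teacher-marked errors."""
    if errors > words_attempted:
        raise ValueError("errors cannot exceed words attempted")
    return words_attempted - errors

# A student who reads 93 words in the minute with 5 errors scores 88 WCPM.
print(words_correct_per_minute(93, 5))  # 88
```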
On a CBM graph, baseline data
indicate a student’s initial level of
proficiency and a goal line is drawn
to connect the baseline data to the
desired year-end proficiency level. Following each CBM test, the teacher plots
the student’s score on the graph to
determine whether the student is scoring above or below the goal line, and
uses a predetermined rule to decide if
instruction needs to be modified. Using
the four-point rule, for example, the
teacher looks at the four most recent
of the first six data points. If all four are
above the goal line, the teacher raises
the goal; if all four fall below the goal
line, an instructional change may need
to be implemented (Stecker, Fuchs, &
Fuchs, 1999).
Researchers have proposed several
decision rules, in addition to the four-point rule, that educators can use
to determine if a teaching change is
needed. It is critical that one rule is chosen and then applied consistently across
time and among all students being
monitored.
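The graphing-and-decision procedure above can be sketched as follows. Function names, the weekly schedule, and the example numbers are assumptions for illustration; the decision labels paraphrase the four-point rule as described.

```python
def goal_line_value(baseline, goal, total_weeks, week):
    """Expected score at `week` on the straight line connecting the
    baseline score (week 0) to the year-end goal (week total_weeks)."""
    return baseline + (goal - baseline) * week / total_weeks

def four_point_decision(recent_scores, expected_scores):
    """Apply the four-point rule to the four most recent data points:
    all four above the goal line -> raise the goal; all four below ->
    consider an instructional change; otherwise continue the program."""
    assert len(recent_scores) == len(expected_scores) == 4
    if all(s > e for s, e in zip(recent_scores, expected_scores)):
        return "raise goal"
    if all(s < e for s, e in zip(recent_scores, expected_scores)):
        return "change instruction"
    return "continue current program"

# Baseline 40 WCPM, year-end goal 90 WCPM over a 30-week year.
expected = [goal_line_value(40, 90, 30, w) for w in (7, 8, 9, 10)]
print(four_point_decision([50, 51, 50, 52], expected))  # -> change instruction
```

Whatever rule a school adopts, encoding it this explicitly is what makes it possible to apply the rule consistently across time and across all students being monitored.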
Teacher Support and Training
Studies have shown that teachers who
use CBM to monitor progress, adjust
instruction, and determine the effectiveness of interventions have higher rates
of student achievement and learning
than those who do not use CBM (Bowman, 2003; Mellard & Johnson, 2008;
Hosp & Hosp, 2003). As a principal,
what factors do you need to consider
and address to help prepare and support teachers in the use of CBM?
One research review that examined
the effect of CBM on the achievement
of students with learning disabilities
concluded the following:
- Progress monitoring alone will not have a significant impact on student achievement. Teachers must modify their instruction based on what the data indicate.
- The use of data-based decision rules is important; teachers should use them to make necessary instructional changes.
- Computer applications that collect, store, manage, and analyze data make using CBM more efficient and contribute to teacher satisfaction.
- Ongoing teacher support, including a system that provides teachers with instructional recommendations, may be needed (Stecker, Fuchs, & Fuchs, 2005).
While CBM is designed to be time-efficient for teachers, it is important
to note that time is cited as the biggest
barrier to its implementation. Teachers need time for training and practice
in all aspects of CBM, such as how to
administer the various probes, how to
set annual performance goals, and how
to analyze graphs.
Actually using the data to make
instructional changes may be one of
the most difficult steps for teachers.
Wesson (1991) suggests that as districts
train teachers to use CBM, they should
encourage them to meet regularly with
one another, rather than with outside
experts, to discuss what they are finding.
A “seamless and flexible system of
progress monitoring” (Wallace, Espin,
McMaster, Deno, & Foegen, 2007)
remains a goal of researchers. In the
meantime, three decades of study
have produced a significant research
base of reliable and valid CBM measures that schools can use to monitor
student progress and support RTI
implementation.
Suzanne Clarke is an issues analyst at
Educational Research Service. Her e-mail
address is sclarke@ers.org.
References
Ardoin, S. P., Witt, J. C., Suldo, S. M.,
Connell, J. E., Koenig, J. L., Resetar,
J. L., et al. (2004). Examining the
incremental benefits of administering a
maze and three versus one curriculum-based measurement reading probes
when conducting universal screening.
School Psychology Review, 33(2), 218-233.
Bowman, L. J. (2003). Secondary educators promoting student success: Curriculum-based measurement. Retrieved February 7, 2008, from http://coe.ksu.edu/esl/lasestrellas/presentations/Lisa_CBM.ppt
Deno, S. L. (1985). Curriculum-based
measurement: The emerging alternative.
Exceptional Children, 52(3), 219-232.
Deno, S. L. (2003). Developments in
curriculum-based measurement. The
Journal of Special Education, 37(3), 184-192.
Deno, S. L., & Fuchs, L. S. (1987).
Developing curriculum-based
measurement systems for data-based
special education problem solving. Focus
on Exceptional Children, 19(8), 1-16.
Deno, S. L., Fuchs, L. S., Marston, D.,
& Shin, J. (2001). Using curriculum-based measurement to establish growth
standards for students with learning
disabilities. School Psychology Review, 30(4),
507-524.
Foegen, A., Jiban, C., & Deno, S. (2007).
Progress monitoring measures in
mathematics. The Journal of Special
Education, 41(2), 121-139.
Holland-Coviello, R. (n.d.). Using curriculum-based measurement (CBM) for student progress
monitoring. Retrieved August 26, 2008,
from www.studentprogress.org/player/
playershell.swf
Hosp, M. K., & Hosp, J. L. (2003).
Curriculum-based measurement for
reading, spelling, and math: How to do it
and why. Preventing School Failure, 48(1),
10-17.
McMaster, K., & Espin, C. (2007).
Technical features of curriculum-based
measurement in writing: A literature
review. The Journal of Special Education,
41(2), 68-84.
Mellard, D. F., & Johnson, E. (2008). RTI:
A practitioner’s guide to implementing
response to intervention. Thousand Oaks,
CA: Corwin Press and Alexandria, VA:
National Association of Elementary
School Principals.
National Center on Student Progress
Monitoring. (n.d.). Common questions for
progress monitoring. Retrieved August 26,
2008, from www.studentprogress.org/
progresmon.asp#2
Stecker, P. M., Fuchs, L. S., & Fuchs, D.
(1999). Using curriculum-based measurement
for assessing reading progress and for making
instruction decisions. Retrieved February
25, 2008, from www.onlineacademy.org/
modules/a300/lesson/lesson_3/xpages/
a300c3_40200.html
Stecker, P. M., Fuchs, L. S., & Fuchs,
D. (2005). Using curriculum-based
measurement to improve student
achievement: Review of research.
Psychology in the Schools, 42(8), 795-819.
Stecker, P. M., & Lembke, E. S. (2005).
Advanced applications of CBM in reading:
Instructional decision-making strategies
manual. Washington, DC: National
Center on Student Progress Monitoring.
Wallace, T., Espin, C. A., McMaster, K.,
Deno, S. L., & Foegen, A. (2007). CBM
progress monitoring within a standards-based system. The Journal of Special
Education, 41(2), 66-67.
Wesson, C. L. (1991). Curriculum-based
measurement and two models of follow-up consultation. Exceptional Children,
57(3), 246-256.
Wright, J. (n.d.). Curriculum-based
measurement: A manual for teachers.
Retrieved February 5, 2008, from
www.jimwrightonline.com/pdfdocs/
cbaManual.pdf
Web Resources
On the National Center on
Student Progress Monitoring Web
site, educators can download articles,
presentations, and webinars that
address a range of CBM topics, such as
improving instruction with CBM and
using progress monitoring to develop
individualized education programs. The
site also has a section for families, which
includes resources that can be used with
parents to explain progress monitoring.
www.studentprogress.org
This site offers step-by-step manuals
and guides on how to use CBM,
including how to administer the various
probes and graph the results. It also
includes forms for recording data and
scoring sheets that can be downloaded.
www.interventioncentral.org/htmdocs/interventions/cbmwarehouse.php
The Research Institute on Progress
Monitoring Web site includes
an in-depth training manual for
implementing reading CBM and
procedures for scoring writing samples.
www.progressmonitoring.net
Principal, January/February 2009