Exceptional Children
Vol. 79, No. 2, pp. 135-144.
© 2013 Council for Exceptional Children.
Evidence-Based Practices
and Implementation Science
in Special Education
BRYAN G. COOK
University of Hawaii
SAMUEL L. ODOM
University of North Carolina at Chapel Hill
ABSTRACT:
Establishing a process for identifying evidence-based practices (EBPs) in special education has been a significant advance for the field because it has the potential for generating more
effective educational programs and producing more positive outcomes for students with disabilities.
However, the potential benefit of EBPs is bounded by the quality, reach, and maintenance of
implementation. The cross-disciplinary field of implementation science has great relevance for
translating the promise of EBPs into positive outcomes for children and youth with disabilities.
This article examines the history, extent, and limitations of EBPs and describes the emergence and
current state of implementation science as applied in special education. Subsequent articles in this
special issue of Exceptional Children address a range of issues related to implementation science in
special education: the research-to-practice gap, dissemination and diffusion, adherence and sustainability, scaling up, a model for state-level implementation, and fostering implementation through
professional development.
Educators generally agree that broad implementation of practices shown by scientific research to reliably cause increased student performance (i.e., evidence-based practices; EBPs) will result in increased student outcomes (Cook, Smith, & Tankersley, 2012; Slavin, 2008b). Despite special
educators' general affinity for the concept of
EBPs, as Odom and colleagues (2005) suggested,
the devil of EBPs is in the details. Odom et al.
were referring to the difficulties involved in identifying EBPs (e.g., How many studies must support
an EBP? What research designs should be considered? What quality indicators are necessary for a
trustworthy study? What effects must a practice
have to be considered an EBP?). Although these
issues continue to be debated (see Cook, Tankersley, & Landrum, 2009a; Slavin, 2008a), there has
been considerable progress in generating and applying guidelines for identifying EBPs in general
(e.g., What Works Clearinghouse, WWC, 2011)
and special (e.g., National Professional Development Center on Autism Spectrum Disorders, n.d.;
National Secondary Transition Technical Assistance Center, n.d.) education. However, the EBP
movement may have leapt from the frying pan into the fire: The progress made in identifying EBPs has highlighted the devilish details involved with implementation of EBPs, which now need to be addressed.

The gap—described by some as a chasm (e.g., Donovan & Cross, 2002)—between research and practice is a recurring theme in special education. Indeed, we suspect that the gap has been present in special education as long as research and practice have co-existed. Attempts to bridge the research-to-practice gap by identifying and implementing effective practices are a rich part of special education's history (Mostert & Crockett, 1999-2000). Despite considerable focus on the research-to-practice gap (e.g., Carnine, 1997; Greenwood & Abbott, 2001) and on identifying EBPs as means to bridge it (e.g., Cook et al., 2009b; Odom et al., 2005), there is little evidence suggesting that the gap has been meaningfully reduced. For example, a U.S. Department of Education report (Crosse et al., 2011) noted that only 7.8% of prevention programs related to substance abuse and school crime used in over 5,300 schools met their standards for an EBP. And, in special education, practitioners have reported using instructional practices shown by research to be ineffective (e.g., learning styles) with similar or greater frequency than some research-based practices (e.g., mnemonics; Burns & Ysseldyke, 2009).

This special issue of Exceptional Children focuses on addressing some of the devilish details related to bridging the research-to-practice gap by achieving broad, sustained, and high-quality implementation of EBPs. There is an emerging field of implementation science (Eccles & Mittman, 2006) that can be applied in special education to enhance the utilization of EBPs. To contextualize consideration of implementation science related to EBPs in special education, it's important to define what an EBP is, as well as to be aware of critical caveats and controversies related to EBPs in the field of special education.

EVIDENCE-BASED PRACTICES

WHAT ARE EBPS?

Emerging from the field of medicine in the early 1990s (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996), EBPs are practices and programs shown by high-quality research to have meaningful effects on student outcomes. The logic behind EBPs is simple: Identifying and using the most generally effective practices will increase consumer (e.g., student) outcomes. This logic rests on the assumptions that the most effective practices were not previously identified, implemented, or both; and that certain types of research (i.e., high-quality studies using designs from which causality can be inferred) are the best tools to determine effectiveness. Although not without detractors (e.g., Gallagher, 2004; Hammersley, 2005), this logic has been generally accepted (Slavin, 2008b) and even written into law (i.e., the No Child Left Behind Act of 2001's emphasis on "scientifically based research").

Unlike previous approaches for identifying effective practices in education (e.g., best practices, research-based practices), supporting research for EBPs must meet prescribed, rigorous standards (Cook & Cook, 2011). Although specific standards for EBPs vary between and within fields, research support for EBPs generally must meet standards along several dimensions, including research design, quality, and quantity. Typical guidelines require that for a practice to be considered evidence-based it must be supported by multiple, high-quality, experimental or quasi-experimental (often including single-case research) studies demonstrating that the practice has a meaningful impact on consumer (e.g., student) outcomes.

Discussion and promotion of EBPs have become seemingly ubiquitous in education in recent years (Detrich, 2008)—EBPs are promoted in national, state, and local educational policies; in professional conferences, university courses, and professional development; in professional standards; and in informal discussions among educators. The federally funded WWC (http://ies.ed.gov/ncee/wwc/), established in 2002, is perhaps the most comprehensive and well-known source of EBPs for education. Until recently, however, the WWC did not identify EBPs for students with disabilities, and now does so only for certain disability groups. (The WWC has begun reviewing the evidence base of practices for students with learning disabilities, in early childhood special education, and with emotional and behavioral disorders.)

To address the need for standards for EBPs designed for and by special educators, Gersten et
al. (2005) and Horner et al. (2005) generated
standards for identifying EBPs in special education using group experimental and single-subject
research, respectively, in a special issue of Exceptional Children (Odom, 2005). Since that special
issue, various organizations and teams of special
education scholars have used the standards proposed by Gersten et al. and Horner et al. (2005;
e.g., Cook et al., 2009a), used standards adapted
from Gersten et al. and Horner et al. (Odom,
Collet-Klingenberg, Rogers, & Hatton, 2010),
and developed independent sets of standards (e.g.,
National Autism Center, 2009) to begin to identify a corpus of EBPs in special education. These
and other ongoing efforts to establish EBPs in
special education represent an important advance
for the field. However, EBPs are not a panacea,
and considerable and fundamental work remains
to be done if they are to meaningfully improve
outcomes for children and youth with disabilities.
CAVEATS AND CONTROVERSIES

The introduction of EBPs in any field seems to be inexorably followed by a period of questioning and resistance, which certainly has occurred in education (e.g., Hammersley, 2007; Thomas & Pring, 2004). Although a complete discussion of caveats and controversies regarding EBPs in (special) education is beyond the scope of this article (see Cook et al., 2012, for an extended discussion), we focus our attention here on a few prominent issues of which special educators should be aware: EBPs are not guaranteed to work for everyone, identification of EBPs is incomplete and variable, and EBPs will not be implemented automatically or easily in the "real world" of schools and classrooms.

EBPs Are Not Guaranteed to Work for Everyone. No practice will work for every single student; this is a reality of education (indeed, for all social sciences) of which special educators are keenly aware. As such, when educational researchers speak of causality, they do so in a probabilistic rather than absolute sense. That is, saying that an instructional practice causes improved educational outcomes means that the practice reliably results in improved outcomes for the vast majority, but not all, students who receive the intervention. For example, Torgesen (2000) estimated that the most effective early reading interventions do not positively impact between 2% and 6% of children. Researchers typically refer to students for whom effective practices do not cause meaningfully improved outcomes as treatment resisters or nonresponders. Although EBPs have relatively low rates of nonresponders, it is important to recognize that even when implemented with fidelity and over time EBPs will not result in optimal outcomes for all students. Thus, when selecting practices to use in special education programs, EBPs are a good place to start; but the application of an EBP, like any other instructional practice, represents an experiment of sorts in which special educators must validate its effectiveness for each individual child.

Incomplete and Variable Identification of EBPs. Although more and more EBPs are being identified in both general and special education, because of the considerable time and expertise it takes to complete an evidence-based review (i.e., apply standards for EBPs to the body of research literature examining the effectiveness of a practice), many practices have not yet been reviewed. And because of the relative scarcity of high-quality, experimental research in the educational literature (Berliner, 2002; Seethaler & Fuchs, 2005), many evidence-based reviews result in the conclusion that there is simply not enough high-quality research utilizing appropriate designs to meaningfully determine whether a practice is evidence-based. In other words, just because a practice is not considered an EBP does not necessarily mean that it is ineffective. It is then important to distinguish between practices that are not considered evidence-based because (a) they have been shown by multiple, high-quality research studies from which causality can be inferred to be ineffective and (b) an evidence-based review has not been conducted or there is insufficient research to conclusively determine whether the practice is effective (Cook & Smith, 2012). The former practices should rarely if ever be used, whereas the latter might be implemented when relevant EBPs have not been identified or a student has been shown to be a nonresponder to identified EBPs.
Special educators also should recognize that there are many different approaches for identifying and categorizing EBPs. For example, Horner et al. (2005) proposed dichotomously categorizing practices (i.e., evidence-based or not evidence-based), Gersten et al. (2005) proposed a three-tiered approach (i.e., evidence-based, promising, and not evidence-based), and the WWC (2011) uses six classifications (i.e., practices with positive, potentially positive, mixed, indeterminate, potentially negative, and negative effects) to categorize the evidence base of practices. Moreover, approaches for identifying EBPs in education vary on specific standards for research design, quality of research, quantity of research, and effect size (see Cook et al., 2012, for an extended discussion). Accordingly, the evidence-based status of some practices will likely vary across EBP sources (Cook & Cook, 2011). It is important, then, to consider EBPs within the context of the specific standards used to identify them.
Implementation. The research-to-practice gap underlies what is probably the most vexing caveat related to EBPs: the difficulty in translating research findings to the everyday practices of teachers in typical classrooms. As EBPs in education began to be identified, relatively little attention was given to how to implement them, perhaps under the assumption that school personnel would eagerly and readily apply identified EBPs. However, as Fixsen, Blase, Horner, and Sugai (2009) noted, "choosing an evidence-based practice is one thing, implementation of that practice is another thing altogether" (p. 5). The problem of implementation is not unique to EBPs and likely underlies the generally disappointing outcomes associated with most school reform efforts (e.g., Sarason, 1993). Implementing and sustaining new practices involves a host of complex and interrelated problems, including issues related to the practice being promoted (e.g., relevance and fit to target environment, efficiency and practicality), users (e.g., available time, mistrust of research, knowledge of EBPs, skills), and the institutional context (e.g., available resources, organizational structures and culture, staffing, coaching, training, administrative support; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Nelson, Leffler, & Hansen, 2009; Tseng, 2012).
Implementation issues have been referred to as "wicked" problems (e.g., Fixsen, Blase, Duda, Naoom, & Van Dyke, 2009; Signal et al., 2012) because, among other characteristics, they are moving targets that fight back (Rittel & Webber, 1973). For example, Fixsen, Blase, Metz, and Van Dyke (this issue) noted that organizational systems work to sustain the status quo by "overwhelm[ing] virtually any attempt to use new evidence-based programs" (i.e., fight back). In contrast, tame issues may be complex, but they tend not to change or actively resist being solved. As difficult as it may be to address the tame issue of how to identify EBPs, it is a fixed, circumscribed issue that, once solved, stays solved. It is hardly surprising, then, that typical, passive approaches for promoting the implementation of EBPs (e.g., "train and hope") that do not provide systematic and ongoing supports almost invariably fail to address the wicked problems of implementation and therefore seldom result in broad, sustained change (Fixsen et al., 2005).
Implementation is the critical link between research and practice. Fixsen et al. (this issue) proposed a simple formula to represent the critical interaction of research efficacy and practice (implementation) in generating outcomes:

Effective interventions × effective implementation = improved outcomes

The implication of this formula is that in the absence of implementation, even the most effective intervention will not yield desired outcomes.
Glasgow, Vogt, and Boles (1999) conceptualized the slightly more elaborate RE-AIM framework to represent the importance of multiple dimensions of implementation in determining a practice's real-world impact. The RE-AIM model considers four aspects of implementation in addition to a practice's efficacy in determining impact—R × E × A × I × M = impact, where:

• Reach: the proportion of the target population reached by a practice.

• Efficacy: the success rate of a practice when implemented appropriately.

• Adoption: the proportion of targeted settings that adopt the practice.

• Implementation: the proportion of interventionists who implement the practice with fidelity in real-world settings.

• Maintenance: the proportion of organizations (e.g., schools) and interventionists (e.g., teachers) who maintain implementation of the practice over time.
Imagine, for example, that a school district
adopts an EBP for its students with learning disabilities in elementary schools. District personnel
are understandably excited to begin the new year
by rolling out a practice that has been shown by
multiple, high-quality studies to meaningfully improve outcomes for, say, 95% of elementary children with learning disabilities. However, only
80% of elementary schools agree to participate in
the project (reach). Further, given problems related to training, planning and instructional time,
and reluctance to adopt new practices, only 70%
of teachers within targeted schools end up using
the practice at all (adoption). Due to sometimes
ineffectual training and lack of ongoing support,
perhaps only 60% of teachers who adopt the
practice implement it with fidelity; and only 50%
of those maintain their use of the practice over
the entire school year. In this scenario, actual impact is calculated as

.95 (efficacy) × .80 (reach) × .70 (adoption) × .60 (implementation) × .50 (maintenance) = .16

In other words, due to problems at various levels of implementation, the EBP actually had the desired impact on slightly less than 16% of elementary students with learning disabilities—a far cry from the rosy 95% efficacy that district administrators found so attractive.
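To make the arithmetic above concrete, the RE-AIM calculation can be restated as a short computation. The following sketch is purely illustrative: the function name re_aim_impact is our own, not part of Glasgow et al.'s (1999) framework, and it simply multiplies the five proportions from the district scenario.

    # Illustrative restatement of the RE-AIM impact calculation.
    # Impact is the product of five proportions (each between 0 and 1),
    # so no single dimension can raise impact above any other dimension's value.
    def re_aim_impact(reach, efficacy, adoption, implementation, maintenance):
        return reach * efficacy * adoption * implementation * maintenance

    # District scenario from the text: 95% efficacy, 80% reach, 70% adoption,
    # 60% implementation fidelity, 50% maintenance.
    impact = re_aim_impact(reach=0.80, efficacy=0.95, adoption=0.70,
                           implementation=0.60, maintenance=0.50)
    print(f"Actual impact: {impact:.2%}")  # Actual impact: 15.96%

Because every factor is a proportion, each dimension caps the achievable impact, which is the sense in which efficacy and implementation jointly set a ceiling on real-world outcomes.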
After considering these numbers, it may seem that special educators would be better served by pursuing practices that appeal to teachers and are easily implemented, but which are less effective (i.e., typical practice), than by chasing the large effects of EBPs that may be difficult to realize. However, special educators sell themselves short—and, more important, do a disservice to the students they serve—by settling for practices with limited effects. Efficacy and implementation both set a ceiling for real-world impact. Just as a highly efficacious intervention that is not implemented will have no real effect, an ineffective practice that is broadly implemented remains an ineffective practice that will, at best, have limited impact. When considering the importance of implementation, educators should not disregard the importance of efficacy, but rather realize the symbiotic relationship of efficacy and implementation in determining impact.

The recent emphasis on EBPs in special education is laudable, encouraging, and necessary, but identification of EBPs is insufficient without supporting their use in common practice (Odom, 2009). The challenge is how to achieve high levels of implementation of the most effective practices. Unfortunately, because sound research investigating implementation has been sparse, "we are faced with the paradox of non-evidence-based implementation of evidence-based programs" (Drake, Gorman, & Torrey, as cited in Fixsen et al., 2005, p. 35). Special educators do not yet have complete, empirically substantiated guidelines for supporting implementation of EBPs. The emerging field of implementation science has begun to address this issue by conducting research and generating theories regarding the implementation of EBPs.

IMPLEMENTATION SCIENCE
In the inaugural issue of Implementation Science,
Eccles and Mittman (2006) defined implementation science as "the scientific study of methods to
promote the systematic uptake of research findings and other evidence-based practices into routine practice" (p. 1). A number of related terms
have been used to refer to this area of study (e.g.,
knowledge utilization, knowledge transfer, knowledge translation, implementation research, translational research, diffusion, uptake; Straus, Tetroe, &
Graham, 2009). We use implementation science
because, in our experience, it is the term most frequently used by contemporary education
scholars. This is not meant to suggest that a
definitive corpus of knowledge has been established in the area of implementation (i.e., a science of implementation); rather, it denotes a field
of scientific inquiry in which issues related to
implementation are investigated.
Implementation science, which draws on a
rich history of foundational research investigating
implementation in various fields (e.g., Rogers,
1962; see Weatherly & Lipsky, 1977, for an
example in special education), is associated most
closely with the second of two phases of translation research. The first phase of translating
research into practice involves the relatively neat, orderly, and well-funded endeavors of
conducting and synthesizing applied research to
determine what works in real-world settings (i.e.,
establishing EBPs; Hiss, 2004). Hiss suggested
that Phase 2 translation research, which investigates adopting and sustaining the EBPs identified
in Phase 1 translation research, tends to be messy
and poorly funded. However, with the recent
increase in attention being paid to implementation (or lack thereof), funding appears to be
increasing. For example, the W. T. Grant Foundation recently funded 15 research projects in general education designed to examine how research
is used to inform policy and practice in local
schools (Tseng, 2012).
Essentially, the goal of inquiry in implementation science is to research and understand how
innovations are adopted and maintained, so that
implementation moves from "letting it happen"
to "making it happen" (Greenhalgh, Robert, MacFarlane, Bate, Si Kyriakidou, 2004). As has been
the case with the vast majority of previous education reforms, letting EBPs happen (i.e., assuming
that they will be implemented by virtue of their
identification) has proven largely unsuccessful
(Tseng, 2012). To bring about the broad and sustained implementation of EBPs, special educators
need to (a) look to the lessons learned thus far
from implementation science and (b) identify
what is not known about making EBP implementation happen and conduct research to systematically fill those gaps in our knowledge base.
Based on their comprehensive review of the
literature in implementation science, Fixsen et al.
(2005) concluded that the relatively sparse experimental research in implementation science indicates that providing guidelines, policies,
information, and training is not enough to
"make it happen." In contrast, long-term, multilevel strategies tend to result in successful implementation. The authors gleaned seven core
implementation components (or implementation
drivers) that, when in place and functioning at a
high level, can routinely change and improve
practitioner behavior related to the implementation of EBPs: staff selection, preservice and inservice training, ongoing consultation and coaching, staff evaluation, program evaluation, facilitative
administrative support, and systems interventions
(i.e., "strategies to work with external systems to
ensure the availability of the financial, organizational, and human resources required to support
the work of the practitioners," p. 29). They suggested that purveyors—change agents who are
experts at identifying and addressing obstacles to
implementation—are critical for utilizing core
implementation components to achieve broad
and sustained implementation of EBPs.
Schoolwide positive behavior support
(SWPBS) is a good example of a program used in
special education that incorporates lessons from
implementation science into its design (see McIntosh, Filter, Bennett, Ryan, & Sugai, 2010). Indeed, SWPBS implementation is guided by a
model incorporating five principles drawn from
implementation science: contextual fit, priority,
effectiveness, efficiency, and using data for continuous regeneration (McIntosh, Horner, & Sugai, 2009). For example, SWPBS practices are modified to maximize fit with the environment in
which they will be implemented, although modifications are made with a strong understanding of
SWPBS such that they do not violate the integrity
of core components of the intervention (i.e.,
fidelity with flexibility; see Harn, Parisi, &
Stoolmiller, this issue). Moreover, SWPBS frequently utilizes structures such as state leadership
teams that lead and coordinate training, coaching,
and evaluation to systematically support and scale
up SWPBS (see Fixsen et al., this issue; Sugai &
Horner, 2006). Such attention to the principles of
implementation science has, no doubt, contributed to SWPBS's extensive, sustained, and effective
application (e.g., Horner, Sugai, & Anderson,
2010).
Fixsen et al. (2005) defined implementation
broadly: "activities designed to put into practice
an activity or program" (p. 5). Thus, virtually any
activity involved in the implementation process
might be considered under the purview of implementation science. The topics addressed in this
special issue of Exceptional Children (i.e., a theoretical framework for linking research to practice,
dissemination, balancing fidelity with flexibility and fit, scaling-up implementation efforts,
statewide implementation efforts, and professional development) are by no means exhaustive
of the many and varied elements of implementation science that have application for special education. We have included topics that represent
what we believe to be among the most critical
areas for improving the implementation of EBPs
in special education.
ARTICLES IN THIS SPECIAL ISSUE
The purpose of this special issue, and each of the
articles in it, is two-fold: (a) review emerging evidence in the area of implementation science that
special education scholars, policy makers, administrators, and other stakeholders can apply to
advance the implementation of EBPs and (b) provide a framework for identifying unanswered
questions for future research to explore related to
implementation of EBPs in special education. In
the first article, Smith, Schmidt, Edelen-Smith,
and Cook propose a conceptual framework for
understanding and bridging the research-to-practice gap. Drawing from Stokes's (1997) Pasteur's
quadrant model, they posit that rather than dichotomizing research as either rigorous or relevant, research must be both rigorous and relevant
to be translated into practice and positively impact student outcomes. Smith et al. propose that
educational design research conducted within
communities of practice is a promising approach
for conducting relevant and rigorous inquiry that
will facilitate implementation of EBPs.
One of the critical stages of translating research to practice is disseminating and diffusing
EBPs. Unfortunately, EBPs are primarily disseminated in traditional and passive ways (e.g., journal
articles, research briefs) that hold little sway with
the practitioners who actually implement the
practices. In the second article, Cook, Cook, and Landrum explore a variety of approaches for actively and effectively disseminating research-validated practices. They utilize Heath and
Heath's (2008) SUCCESs model, which posits
that dissemination efforts that "stick" are simple,
unexpected, concrete, credible, emotional, and
conveyed as stories. They provide theoretically
and empirically validated dissemination approaches that might be utilized and researched
further by special educators in each of these areas.
If practitioners do not implement EBPs with
fidelity or as designed, the practices may not have
the same positive effect demonstrated in research
studies. However, in the third article, Harn, Parisi,
and Stoolmiller note that demanding rigid adherence to predetermined procedures will decrease
the likelihood that practitioners will adopt and
sustain a practice. Moreover, practitioners being
more concerned with adherence than meeting the
needs of their students may actually decrease EBP
effectiveness. Harn et al. discuss different aspects
of the multifaceted construct of implementation
fidelity and how programs and practices can be
designed flexibly so that they can be implemented
with fidelity but still meet the needs of different
students in varying educational contexts.
In the fourth article, Klingner, Boardman,
and McMaster discuss issues related to scaling up
EBPs. The issue of scale is of critical importance in
implementation science. Although implementing
an EBP in a single school will positively impact
the outcomes of a limited number of students
with disabilities, if implementation of EBPs is
addressed one school at a time, the research-topractice gap is likely to remain wide. Klingner et
al. propose a model of scaling up at the district
level that involves district-researcher partnerships,
integrating new practices with other district initiatives, tailoring the EBP to the districts' needs, enlightened professional development that includes
team building and coaching, and district leadership that ensures communication with school personnel. They also provide an example of the
model in practice.
In the fifth article, Fixsen and colleagues
from the National Implementation Research Network (NIRN) apply a model of implementation
science to address the problem of promoting programs that utilize EBPs for students with disabilities. They emphasize the importance of building
an infrastructure at the state level, and propose a
framework that involves external systems change
support, the creation of an executive management
team, a process for training and support that
flows from policy to practice levels, and, of particular importance, a feedback loop that incorporates information from the practice level into
ongoing planning to support implementation.
The sixth article relates Odom, Cox, Brock,
and the NPDC Research Group's design of a professional development program supporting improvement in program quality and practitioners'
use of EBPs for students with autism spectrum
disorders, which followed an implementation science process based on the work of Fixsen, NIRN,
and others. The process begins with developing a
planning team at the state policy level, selecting a
team or teams for providing technical assistance
and coaching, providing training to practitioners
and technical assistance providers together, and
transferring control from professional development projects (i.e., Fixsen et al.'s, this issue, external systems change support) to state providers.
CONCLUSION
We have been involved with two previous special issues of Exceptional Children that, respectively, set forth guidelines for identifying EBPs in special education (Odom, 2005) and applied those guidelines to a variety of research bases in special education to identify EBPs in our field (Cook et al., 2009b). We believe that this work has helped to advance the potential role of research in practice, although the actual impact of EBPs on the outcomes of children and youth with disabilities is unavoidably bounded by implementation. EBPs cannot have an impact unless they are implemented. Difficulties with implementation are not unique to education, and they have spawned an emerging multidisciplinary field of implementation science, dedicated to better understanding how to translate research knowledge into practice. It will be important for special education scholars to understand and apply relevant lessons from the implementation science literature, some of which are included in this special issue, to realize the fruits of their labor related to EBPs; that is, to translate research findings into improved practice and student outcomes. This special issue is, we believe, a fitting conclusion to what is now a trilogy of Exceptional Children special issues on EBPs. Implementation is the next, and arguably most critical, stage of evidence-based reforms.

REFERENCES

Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18-20. http://dx.doi.org/10.3102/0013189X031008018

Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based instructional practices in special education. The Journal of Special Education, 43, 3-11. http://dx.doi.org/10.1177/0022466908315563

Carnine, D. (1997). Bridging the research-to-practice gap. Exceptional Children, 63, 513-521.

Cook, B. G., & Cook, S. C. (2011). Unraveling evidence-based practices in special education. The Journal of Special Education. Advance online publication. http://dx.doi.org/10.1177/0022466911420877

Cook, B. G., & Smith, G. J. (2012). Leadership and instruction: Evidence-based practices in special education. In J. B. Crockett, B. S. Billingsley, & M. L. Boscardin (Eds.), Handbook of leadership in special education (pp. 281-296). London, England: Routledge.

Cook, B. G., Smith, G. J., & Tankersley, M. (2012). Evidence-based practices in education. In K. R. Harris, S. Graham, & T. Urdan (Eds.), APA educational psychology handbook, Vol. 1 (pp. 493-528). Washington, DC: American Psychological Association.

Cook, B. G., Tankersley, M., & Landrum, T. J. (2009a). Determining evidence-based practices in special education. Exceptional Children, 75, 365-383.

Cook, B. G., Tankersley, M., & Landrum, T. J. (Eds.). (2009b). Evidence-based practices for reading, math, writing, and behavior [Special issue]. Exceptional Children, 75.

Crosse, S., Williams, B., Hagen, C. A., Harmon, M., Ristow, L., DiGaetano, R., . . . Derzon, J. H. (2011). Prevalence and implementation fidelity of research-based prevention programs in public schools: Final report. Washington, DC: U.S. Department of Education. Retrieved from http://www2.ed.gov/rschstat/eval/other/research-based-prevention.pdf

Detrich, R. (2008). Evidence-based, empirically supported, or best practice? A guide for the scientist-practitioner. In J. K. Luiselli, D. C. Russo, W. P. Christian, & S. M. Wilczynski (Eds.), Effective practices for children with autism (pp. 3-25). Oxford, England: Oxford University Press.
Donovan, M. S., & Cross, C. T. (Eds.). (2002). Minority students in special and gifted education. Washington, DC: National Academies Press.

Eccles, M. P., & Mittman, B. S. (2006). Welcome to Implementation Science. Implementation Science, 1(1), 1-3. Retrieved from http://www.implementationscience.com/content/1/1/1

Fixsen, D., Blase, K., Horner, R., & Sugai, G. (2009). Concept paper: Developing the capacity for scaling up the effective use of evidence-based programs in state departments of education. Retrieved from http://ea.niusileadscape.org/docs/FINAL_PRODUCTS/LearningCarousel/DevelopingCapacity.pdf

Fixsen, D. L., Blase, K. A., Duda, M. A., Naoom, S. F., & Van Dyke, M. (2009). Wicked problems: From demonstrations to transformation zones. Retrieved from http://www.k12.wa.us/RTI/Implementation/pubdocs/WA_2WickedProblems_NIRN0409_HO.pdf

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Retrieved from http://www.fpg.unc.edu/~nirn/resources/publications/Monograph/pdf/Monograph_full.pdf

Gallagher, D. J. (2004). Educational research, philosophical orthodoxy, and unfulfilled promises: The quandary of traditional research in U.S. special education. In G. Thomas & R. Pring (Eds.), Evidence-based practice in education (pp. 119-132). Columbus, OH: Open University Press.

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149-164.

Glasgow, R., Vogt, T., & Boles, S. (1999). Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health, 89, 1322-1327. http://dx.doi.org/10.2105/AJPH.89.9.1322

Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82, 581-629. http://dx.doi.org/10.1111/j.0887-378X.2004.00325.x

Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24, 276-289. http://dx.doi.org/10.1177/088840640102400403

Hammersley, M. (2005). Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers' case for research-based policy making and practice. Evidence & Policy, 1(1), 85-100. http://dx.doi.org/10.1332/1744264052703203

Hammersley, M. (Ed.). (2007). Educational research and evidence-based practice. Thousand Oaks, CA: Sage.

Heath, C., & Heath, D. (2008). Made to stick: Why some ideas survive and others die. New York, NY: Random House.

Hiss, R. G. (2004). Translational research—two phases of a continuum. In From clinical trials to community: The science of translating diabetes and obesity research (pp. 11-14). Bethesda, MD: National Institute of Diabetes and Digestive and Kidney Diseases. Retrieved from http://www2.niddk.nih.gov/NR/rdonlyres/864EE73D-C876-4B30-A0EB-14E3911E2499/4589/Confpublication.pdf

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1-14.

McIntosh, K., Filter, K. J., Bennett, J. L., Ryan, C., & Sugai, G. (2010). Principles of sustainable prevention: Designing scale-up of school-wide positive behavior support to promote durable systems. Psychology in the Schools, 47, 5-21.

McIntosh, K., Horner, R. H., & Sugai, G. (2009). Sustainability of systems-level evidence-based practices in schools: Current knowledge and future directions. In W. Sailor, G. Dunlap, G. Sugai, & R. H. Horner (Eds.), Handbook of positive behavior support (pp. 327-352). New York, NY: Springer.

Mostert, M. P., & Crockett, J. B. (1999-2000). Reclaiming the history of special education for more effective practice. Exceptionality, 8(2), 133-143. http://dx.doi.org/10.1207/S15327035EX0802_4

National Autism Center. (2009). National standards report. Randolph, MA: Author.

National Professional Development Center on Autism Spectrum Disorders. (n.d.). Evidence-based practices. Retrieved from http://autismpdc.fpg.unc.edu/content/evidence-based-practices

National Secondary Transition Technical Assistance Center. (n.d.). Evidence-based practices. Retrieved from http://www.nsttac.org/content/evidence-based-practices

Nelson, S. R., Leffler, J. C., & Hansen, B. A. (2009). Toward a research agenda for understanding and improving the use of research evidence. Portland, OR: Northwest Regional Educational Laboratory. Retrieved from http://educationnorthwest.org/webfm_send/311

Odom, S. L. (2005). Criteria for evidence-based practice in special education [Special issue]. Exceptional Children, 71.

Odom, S. L. (2009). The ties that bind: Evidence-based practice, implementation science, and outcomes for children. Topics in Early Childhood Special Education, 29, 53-61. http://dx.doi.org/10.1177/0271121408329171

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-148.

Odom, S. L., Collet-Klingenberg, L., Rogers, S., & Hatton, D. (2010). Evidence-based practices for children and youth with autism spectrum disorders. Preventing School Failure, 54, 275-282. http://dx.doi.org/10.1080/10459881003785506

Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Rogers, E. M. (1962). Diffusion of innovations. New York, NY: Free Press.

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71-72. http://dx.doi.org/10.1136/bmj.312.7023.71

Sarason, S. B. (1993). The predictable failure of educational reform: Can we change course before it's too late? San Francisco, CA: Jossey-Bass.

Seethaler, P. M., & Fuchs, L. S. (2005). A drop in the bucket: Randomized controlled trials testing reading and math interventions. Learning Disabilities Research & Practice, 20, 98-102. http://dx.doi.org/10.1111/j.1540-5826.2005.00125.x

Signal, L. N., Walton, M. D., Mhurchu, C. N., Maddison, R., Bowers, S. G., Carter, K. N., . . . Pearce, J. (2012). Tackling 'wicked' health promotion problems: A New Zealand case study. Health Promotion International, 31.

Slavin, R. E. (2008a). Evidence-based reform in education: Which evidence counts? Response to comments. Educational Researcher, 37, 47-50. http://dx.doi.org/10.3102/0013189X08315082

Slavin, R. E. (2008b). What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37, 5-14. http://dx.doi.org/10.3102/0013189X08314117

Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press.

Straus, S. E., Tetroe, J., & Graham, I. D. (2009). Defining knowledge translation. Canadian Medical Association Journal, 181(3-4), 165-168. http://dx.doi.org/10.1503/cmaj.081229

Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining the implementation of school-wide positive behavior support. School Psychology Review, 35, 245-259.

Thomas, G., & Pring, R. (Eds.). (2004). Evidence-based practice in education. Berkshire, England: Open University Press.

Torgesen, J. (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15, 55-64. http://dx.doi.org/10.1207/SLDRP1501_6

Tseng, V. (2012). The uses of research in policy and practice. Social Policy Report, 26(2). Retrieved from http://www.srcd.org/index.php?option=com_content&task=view&id=232&Itemid=658

Weatherly, R., & Lipsky, M. (1977). Street-level bureaucrats and institutional innovation: Implementing special education reform. Harvard Educational Review, 47, 171-197.

What Works Clearinghouse. (2011). Procedures and standards handbook (Version 2.1). Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf

ABOUT THE AUTHORS

BRYAN G. COOK (Hawaii CEC), Professor of Special Education, University of Hawaii, Honolulu. SAMUEL L. ODOM (North Carolina CEC), Director, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill.

Address correspondence concerning this article to Bryan G. Cook, University of Hawaii at Manoa, College of Education, Department of Special Education, 1776 University Avenue, Wist Hall 123, Honolulu, HI 96822 (bgcook@hawaii.edu).

Manuscript received April 2012; accepted July 2012.
A Socio-Cultural Analysis of Practitioner
Perspectives on Implementation of
Evidence-Based Practice in Special
Education
The Journal of Special Education
2016, Vol. 50(1) 27–36
© Hammill Institute on Disabilities 2015
Reprints and permissions:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0022466915613592
journalofspecialeducation.sagepub.com
Roxanne F. Hudson, PhD1, Carol A. Davis, EdD1, Grace Blum, MEd1,
Rosanne Greenway, MEd1, Jacob Hackett, MEd1, James Kidwell, MEd1,
Lisa Liberty, PhD1,2, Megan McCollow, PhD1,3, Yelena Patish, MEd1,
Jennifer Pierce, PhD1,4, Maggie Schulze, MEd1, Maya M. Smith, PhD1,
and Charles A. Peck, PhD1
Abstract
Despite the central role “evidence-based practice” (EBP) plays in special education agendas for both research and policy,
it is widely recognized that achieving implementation of EBPs remains an elusive goal. In an effort to better understand this
problem, we interviewed special education practitioners in four school districts, inquiring about the role evidence and EBP
played in their work. Our data suggest that practitioners’ responses to policies that press for increased use of EBP are
mediated by a variety of factors, including their interpretations of the EBP construct itself, as well as the organizational
conditions of their work, and their access to relevant knowledge and related tools to support implementation. We
interpret these findings in terms of their implications for understanding the problem of implementation through a more
contextual and ecological lens than has been reflected in much of the literature to date.
Keywords
evidence-based practices, implementation, special education practitioners
In the field of special education, a commitment to the logic
and ethics of using research to inform decisions about practice has been reflected in the field’s efforts to identify and
use evidence-based practices (EBPs) as a standard for the
profession (Council for Exceptional Children, 2014; Odom
et al., 2005). As in other fields, this focus has led inexorably
back to what some commentators have termed the “wicked”
problem of implementation (Cook & Odom, 2013).
Fixsen and his colleagues (following Rittel & Webber, 1973) described wicked problems as those that are “difficult to define and fight back when you try to solve them” (Fixsen, Blase, Metz, & Van Dyke, 2013, p. 218). Indeed, the observation that “interests vested in the system-as-is suddenly appear and typically deter attempts to change the system” (Fixsen et al., 2013, p. 218) has been made by ecologically oriented observers of human behavior since the time of Marx (Bronfenbrenner, 1979; Lewin, 1951; Marx, 1888/1984).
One implication of this view, of course, is that the problem
of (non)implementation of EBP may be most usefully
viewed not simply as a “deficit” in the knowledge, skills, or
ideological commitments of practitioners but as a product
of the set of social, organizational, and material conditions
that operate in a given human service setting. In this article,
we draw on interviews conducted with special education
practitioners to investigate how these kinds of contextual
factors (and others) may affect the ways in which practitioners interpret and respond to contemporary press for implementation of EBP.
We are by no means the first to recognize the importance
of seeking practitioner perspectives in understanding the
challenges of implementing EBP in special education. For
example, Landrum, Cook, Tankersley, and Fitzgerald (2002)
surveyed 127 teachers (60 special educators, 67 general educators) to assess their views about the value of four sources
of information about practice: university coursework,
1University of Washington, Seattle, USA
2Northern Illinois University, DeKalb, USA
3Central Michigan University, Mount Pleasant, USA
4American Institutes for Research, Washington, DC, USA
Corresponding Author:
Roxanne F. Hudson, Area of Special Education, University of
Washington, P.O. Box 353600, Seattle, WA 98195, USA.
E-mail: rhudson@uw.edu
research journals, teaching colleagues, and in-service/professional development workshops. Their data indicated that
research journals and university courses (presumably
sources of relatively reliable information about EBP) were
viewed as less useful, less trustworthy, and less accessible
than either information from colleagues or information
received via professional development. Similarly, Boardman,
Argüelles, Vaughn, Hughes, and Klingner (2005) reported
that teachers often expressed the belief that the extant
research was not relevant to the populations they served in
their classrooms, and reported relying on colleagues for recommendations about practice.
In a more recent study, Jones (2009) investigated the
views of 10 novice special educators regarding EBP. Based
on interview, classroom observation, and rating scale data,
Jones suggested that the novice teachers she studied fell
into three broad groups. “Definitive supporters” expressed
clear and positive views about the importance of research in
decisions about classroom practice. “Cautious consumers”
felt research could be useful, but often did not reflect characteristics and needs of their individual students. A third
group, “The Critics,” expressed skepticism about the value
of research for decisions about classroom practice.
Taken together, these studies (and others) provide a
rather robust picture of the tensions between research and
practice in special education. While significant variation
exists among special education practitioners in their views
about the value and relevance of research to their work in
the classroom, many express more confidence in the knowledge and expertise of local colleagues than in information
they might receive from university coursework and/or
researchers. This result is consistent with research from
other fields and suggests that much remains to be learned
about the conditions under which practitioners utilize
knowledge from research in decisions about practice
(Aarons & Palinkas, 2007; Glasgow, Lichtenstein, &
Marcus, 2003).
In our review of the special education research on this
topic, we noted that most researchers have framed their
analysis of practitioner perspectives related to implementation of EBP in essentially individualistic and personological
terms—placing teachers (and, in some cases, administrators) in the center of their analysis of the implementation
process. For example, as noted earlier, Jones (2009) parsed
individual teachers into groups such as “the Critics” and
“the Supporters.” Also focusing on individual practitioners,
Landrum et al. (2002) argued,
Only when we have confidence that teachers learn about
empirically sound practice in both their initial preparation and
ongoing professional development, and that their skills reflect
this training, can we predict that students with disabilities will
be afforded the most appropriate learning opportunities
available. (p. 48)
We do not entirely disagree with these conclusions, and
others like them that underscore the importance of personological variables (e.g., practitioner knowledge, prior training, attitudes) affecting implementation of EBP. But we
would also argue that in foregrounding characteristics of
individual practitioners as a focus of analysis, these studies
reflect a set of implicit assumptions about the nature of
practice and how it is constructed that narrows our view of
the problems of implementation, and the range of actions to
be considered in engaging those problems. In the present
study, we follow recent recommendations (Harn, Parisi, &
Stoolmiller, 2013; Klingner, Boardman, & McMaster, 2013;
Peck & McDonald, 2014) in undertaking a more holistic
and contextual approach to understanding how practitioner
perspectives on EBP are shaped by the conditions in which
they work.
Theoretical Framing
In conceptualizing “a more contextual” approach to understanding practitioner interpretation and implementation of
EBP, we drew on some of the precepts of sociocultural theory as a general framework for investigating ways in which
social and material conditions shape workplace learning
and practice (Billett, 2003; Engeström, 2001; Scribner,
1997; Vygotsky, 1978). Our choice of a sociocultural perspective was based on several of its key precepts that we
believed would be useful in understanding practitioner perspectives on implementation of EBP. First, sociocultural
theory foregrounds analysis of relationships between individual and collective dimensions of social practice—in this
case, the analysis of the transactions that take place between
individual practitioners and the organizations in which they
work (Engeström, 2001). Second, this view assumes that
human thought processes (including, of course, one’s views
about EBP) are shaped by the demands of the practical
activities in which people are regularly engaged. A third
assumption of this stream of sociocultural theory is that participation in social practice is affected by the affordances
and constraints of the conceptual and material tools available (e.g., the characteristics and representations of EBP
available in local school districts and other professional
resources; Falmagne, 1995; Leontev, 1975/1978; Scribner,
1997). Overall, the sociocultural perspective suggests the
value of undertaking a more focused analysis of the social
and organizational conditions in which decisions about
practice are made than has been reflected in much of the
extant research on the problem of implementation. We used
the following research questions to guide our inquiry:
Research Question 1: How do special education practitioners interpret the meaning of EBP in the context of decisions they make about curriculum and instruction?

Research Question 2: What contextual factors are associated with practitioner interpretations of the role EBP can and should play in their decisions about instruction?

Table 1. School District Characteristics.

District   Enrollment   Special education   Students eligible for free or
                        enrollment (%)      reduced-price meals (%)
A          18,123       9.70                22.10
B          20,659       13.60               35.10
C          17,973       13.60               66.90
D          8,920        12.40               26.00
Method
We used a qualitative methodology (Merriam, 2009) to
investigate the perspectives—that is, the values, beliefs,
and attitudes—held by special education practitioners with
regard to their views about EBP, and the role research
played in their decisions about curriculum and instruction.
We elected this methodological approach because of the
hypothesis-generating, rather than hypothesis-testing, purposes of the study (Glaser & Strauss, 1967).
Participants
A total of 27 special education practitioners participated in
our study. We contacted directors of special education via
email and invited participation from four school districts in
the Seattle/Puget Sound area. Demographics for these districts are presented in Table 1.
Teacher participants were nominated by special education directors, who were asked to identify individuals they
believed would be interested in being interviewed for the
study. In each district, we requested nominations of teachers
working in three types of programs or settings: resource
rooms serving students with a wide range of disability
labels placed primarily in general education classrooms,
self-contained classes serving students with emotional/
behavioral disabilities (EBD), and self-contained classrooms serving students with low-incidence developmental
disabilities. Table 2 reports the number, working context,
and experience level of study participants in each of the districts in which we collected data.
Data Collection and Analysis
Interviews. The primary data source for our study consisted
of face-to-face interviews we conducted individually with
the 27 special educators who agreed to participate in the
study. We used semistructured interview protocols for each
of the four types of practitioners we interviewed: special
education directors, resource room teachers, EBD teachers,
and teachers of students with low-incidence developmental
disabilities. While the protocols for administrators and
teachers varied in some ways, both were structured to proceed from general, context-descriptive questions such as
“Tell me about the work you do,” to more focused questions
about daily practice (“Tell me about a typical day in your
classroom”). We asked each informant to define the term
EBP and tell us what it meant to them in terms of their decisions about curriculum and instruction. Interview protocols
also included a series of questions about district policies
related to EBP in both general and special education, and
how these affected the decisions our informants made in the
classroom. Interviews were generally between 45 min and an
hour in length. Interviews were audio-recorded and subsequently transcribed verbatim for analysis. Transcripts were
entered into a web-based platform for qualitative and
mixed-method data analysis (http://www.dedoose.com).
Data analysis. We used the standard procedures for inductive data analysis described by Charmaz (2002), Strauss and
Corbin (1997), and others. Thus, we began our analysis by
having each of the 11 members of our research team read
through the interview transcripts, identifying text segments
of potential relevance to our research questions. Each of
these segments was tagged using low-inference descriptors,
such as “classroom assessment” or “progress monitoring.”
Members of the research team then met to discuss examples
of the text segments they had tagged, identifying and defining codes that emerged from individual analysis so they could be formalized and used collectively. The remaining interviews were then coded, followed by an additional round of team meetings in which examples of each code were discussed and some codes were combined, modified, or deleted based on their perceived value relative to our research questions. A set of interpretive categories was developed through this process; these categories were used to aggregate coded data segments and became the basis for further analysis. The categories were then used to develop a series of data displays (Miles & Huberman, 1994)
organized by district and by each type of participant (i.e.,
resource room teachers, special education directors, etc.).
Team members met to discuss the implications of these
analyses and to develop a set of analytic memos which integrated the categorical data into larger and more interpretive
case summaries. These summaries were used to develop the
set of cross-case findings described below.
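To make the aggregation step concrete, the sketch below shows, in Python, how coded segments might be rolled up into a simple district-by-code matrix display in the spirit of Miles and Huberman (1994). This is a minimal illustration under assumed data: the segment tuples, roles, and code names are hypothetical, not the study's actual coding output.

from collections import Counter, defaultdict

# Hypothetical coded segments (district, role, code), of the kind a team
# might export after consensus coding in a platform such as Dedoose.
segments = [
    ("A", "resource room teacher", "progress monitoring"),
    ("A", "special education director", "district policy"),
    ("B", "self-contained teacher", "isolation"),
    ("B", "EBD teacher", "curriculum alignment"),
    ("B", "self-contained teacher", "progress monitoring"),
    ("D", "resource room teacher", "external authority"),
]

# Tally code frequencies by district and by participant role, mirroring
# data displays organized by district and by type of participant.
by_district = defaultdict(Counter)
by_role = defaultdict(Counter)
for district, role, code in segments:
    by_district[district][code] += 1
    by_role[role][code] += 1

# Print a simple matrix display: one row per district, one column per code.
# The same pattern applies to the by_role tally.
codes = sorted({code for _, _, code in segments})
print("\t".join(["district"] + codes))
for district in sorted(by_district):
    print("\t".join([district] + [str(by_district[district][c]) for c in codes]))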
Results
Our findings suggest that personal characteristics (particularly values and beliefs about EBP), the features of organizations (particularly practitioner positionality within these
organizations), and access to relevant tools all affected the
ways practitioners interpreted the relevance of EBP to decisions they made about practice. We interpreted these as dimensions of practical activity that were inseparable and mutually constitutive (Billett, 2006).

Table 2. Participant Characteristics.

District   Special education director   EBD teacher   Resource room teacher   Self-contained teacher   Participants per district   Median years in position
A          1                            1             2                       2                        6                           7
B          1                            1             2                       3                        7                           10
C          1                            2             2                       2                        7                           6
D          1                            2             2                       2                        7                           6

Note. EBD = emotional/behavioral disabilities.

As depicted in
Figure 1, our data suggest these factors operate in a highly
interdependent manner. We use this conceptual model to
understand both the points of the triangle and the interactions that take place between points as represented by the
lines of the triangle.
In the following sections, we present findings related both to the points of the triangle and to the intersections of its elements. First, we use excerpts from our interviews to
illustrate how the practitioners we interviewed interpreted
the idea of EBP, the organizational contexts they worked in,
and the tools and resources available to them. Second, we
present findings that illuminate the connections and interactions between them.
People: Practitioner Definitions of EBP
We asked each of our informants how they defined EBP in
the context of their work in special education. Most responses to this question reflected the notion that
EBP meant that “someone” had researched a specific program or practice and found it to be effective:
There’s obviously been research and studies so what I picture
in my mind is that they have a curriculum and they conduct a
study where they have kids who participate in the study and
then they probably have some pre- and posttest to see if they’ve
made gains.
I’d say evidence-based would be like, that it’s been tried in lots
of different settings, across you know lots of different
populations and there’s been demonstrated success using that
curriculum or whatever the thing is you’re talking about, you
know, the social skills sheet or something. So it’s used with lots
of people and over different settings.
We noticed that our participants typically defined EBP in
ways that emphasized its external origins, and its ostensive
function as a “prescription” for their practice, rather than as
a resource for their own decision making (Cook & Odom,
2013). In some cases, this interpretation was also congruent
with the stance taken by district administrators:
We have adults that want to continue to do what they’ve done
in the past. And it is not research-based nor if you look from a
data perspective has it been particularly effective and that’s not
going to happen and we say, “This is the research, this is what
you’re going to do.” (Special Education Director, District A)
This strong ideological commitment to use of EBP in the
classroom was shared by some teachers:
I believe that by using research-based instruction, and teaching
with fidelity, then you’re more likely to have an outcome that
is specific to the research, as long as we use the curriculum as
it’s designed. Um, I think it’s vital, I think it’s vital that we are
not pulling things out of a hat, that we are using. (Resource
Room Teacher, District A)
More often, however, we found that practitioner views
about research in general, and the value of EBP in decision
making about classroom practice in particular, were more
ambivalent. Perhaps the most widely shared concern about
EBP expressed by our informants had to do with the tensions they perceived between the “general case” and the
specifics of local context, including the special needs of the
children served in the schools and classrooms in which they
worked (Cook, Tankersley, Cook, & Landrum, 2008).
While the value of research and the relevance of EBP were
often acknowledged in abstract terms, both teachers and
administrators were quick to identify what they perceived
to be limitations in the relevance of research for local decision making and special populations:
. . .well what makes me question it—I’m always curious about
what the norm population is because it is talking about typically
developing kids and research-based practices that are used for
those types of kids. It’s very different for my kids. So when I’m
looking at an evidenced-based practice I want to be clear on
what evidence [is about] Gen Ed versus the Special Ed
population. (Self-Contained Classroom Teacher, District B)
Figure 1. Relationships between people, organizations, and tools. Adapted from McDiarmid & Peck (2012).
For many teachers, ambivalence about EBP included
particular tension about who makes decisions about the relevance of evidence to their classroom practice. These teachers often made reference to local perspectives as “forgotten”
or “overlooked” in decisions about practice:
. . . evidence-based is very important because you do need to
look at what you’re doing but there is just the day-to-day
knowledge that is overlooked in the evidence-based piece.
(Self-Contained Classroom Teacher, District B)
Most of the teachers and many administrators we interviewed appeared to locate the authority for making evidence-based decisions about curriculum and instruction
with the district central office or with “general education.”
For example, one director of special education reported,
“for our resource room students . . . we always start with the
Gen Ed and then if we have curriculum, if a part of that curriculum has a supported intervention component to it we
start with that.” Many teachers similarly viewed the locus
of decisions about curriculum and instruction as external to
their classrooms. As a Resource Room Teacher in District D
put it,
They tell us what to teach and when to teach it. I mean, we
have a calendar and a pacing guide. We can’t, we really don’t
make the decisions too much. I would hope . . . that it’s
supported and making sure that students learn but I don’t
really know.
In some cases, these teachers expressed confidence that
the judgments of district curriculum decision makers were
grounded in appropriate evidence:
I kind of just trust that the district is providing me with
evidence-based stuff. So I’m trusting that the curriculum that
they’ve chosen and that my colleagues have done test work on
is really what they say it is. (EBD Teacher, District D)
However, in other cases, teachers expressed more skeptical views about the trustworthiness of the data district officials used to make decisions about curriculum:
. . . over the years we’ve had so many evidence-based, research
based and so many changes, that . . . if you just want my honest
[opinion] . . . I know that there’s data behind it, but if it’s
evidence based or research based, why are we always changing?
(EBD Teacher, District B)
To summarize, similar to earlier studies (Boardman
et al., 2005; Jones, 2009; Landrum et al., 2002), we found
that the personal characteristics of practitioners—that is,
their experiences, values, beliefs, and attitudes—functioned
as a powerful filter through which they interpreted the
meaning of EBP and evaluated the relevance of this construct for their decision making about curriculum and
instruction. Practitioner definitions of EBP often reflected
the assumption that the locus of authority regarding EBP
lies outside the classroom, and the ostensive function of
EBP was to provide prescriptions for classroom practice.
In the following sections, we report findings related to
our second research question, describing ways in which
contextual features such as organization and access to tools
and resources may influence the way practitioners interpret
the value and relevance of EBP in their daily work.
Organizational Contexts of EBP
Our data suggest that our interviewees’ views about the
value and relevance of evidence in decision making about
practice were often part of a larger process of coping with
the organizational conditions of their work. Several specific
issues were salient in the interviews we conducted. One of
these, of course, had to do with district policies about evidence-based decision making (Honig, 2006). In some districts, special education administrators described strong
district commitments related to the use of research evidence
in decision making:
In this district, it’s [EBP] becoming really big. You don’t ever
hear them talk about any initiative without looking at the
research and forming some sort of committee to look at what
practices are out there and what does the research tell us about
it. And then identifying what are the things we’re after and how
well does this research say they support those specific things
we want to see happen. I would say that work has started, and
that is the lens that comes from Day One of anything we do.
(Special Education Administrator, District C)
However, strong district commitments to evidence-based
curriculum decisions in general education were sometimes
viewed as raising dilemmas for special education teachers.
A teacher of students with emotional and behavioral problems in District B described the problem this way:
. . . if (general education teachers) change their curriculum then
I need to follow it so my kids can be a part of it. Especially with
my kids being more part of the classroom. So you know 4 years
ago I was not doing the Math Expressions, and now I am doing
the Math Expressions and it’s hard because I’m trying to follow
the Gen Ed curriculum and there’s times where the one lesson
ends up being a 2- or 3-day lesson with some added worksheets
because they just aren’t getting the skill and, being a spiraling
curriculum, it goes pretty fast sometimes too. (Self-Contained
Teacher, District B)
While the tensions between curriculum models in general education and special education (with each claiming its
own evidentiary warrants) were problematic for many of
the resource room teachers we interviewed, these dilemmas
were less salient to teachers of children in self-contained
classrooms, including those serving students with EBD and
those serving students with low-incidence disabilities.
Instead, the challenges that these teachers described had to
do with isolation and disconnection from colleagues serving students like theirs. A Self-Contained Classroom
Teacher, District B, said, “Well, this job is very isolating.
I’m the only one that does it in this building . . . so uh I’m
kind of alone in the decisions I make.” Another Self-Contained Classroom Teacher, District D, said,
Sometimes it makes me feel like it’s less than professional . . .
I don’t know, I just sometimes wish that, I feel like there’s not
always a lot of oversight as far as what am I doing. Is there a
reason behind what I’m doing? And did it work? I wish there
was more.
In cases where teachers felt isolated and disconnected
from district colleagues, they often reported relying on
other self-contained classroom teachers for support and
consultation, rather than resources in their district or from
the research literature:
When you’re in an EBD class you can get pretty isolated . . .
but the other beauty of being in an EBD room is you have
other adults in the room that you can talk to, or they can have
other ideas, or you can call other teachers. (EBD Teacher,
District B)
Some special education teachers described being caught between curriculum decisions made in general education and practices they saw as more beneficial for their students with disabilities:

I try to make our classroom setting as much like a general Ed classroom as I can, because the goal is to give them strategies to work with behaviors so that they can function in Gen Ed classrooms. Which is a challenge, because they were in Gen Ed classrooms before they came here, so something wasn’t working.

Resources and Tools Related to EBP

District commitments to use of EBPs in the classroom were in many cases accompanied by allocation of both material and conceptual resources. The resources and supports most often cited by both administrators and teachers in this connection were focused on curriculum materials:

Susan, who is our Special Ed curriculum developer, and Louisa, who’s our curriculum facilitator . . . they recognize that we’ve shifted practice to a really research-based practice . . . We never really did this [before] we just bought books and stuff. And I said, “Well, I don’t operate that way. We’re going to shift practice and I’m going to help you” . . . and she has actually been very, very successful at reaching out and capturing the attention of folks, authors that publish . . . and some other materials and some other research and then digging deep. (Special Education Director, District A)

I think there’s something really powerful about having that scope and sequence and that repetition that gradually builds on itself. Yeah so instead of me trying to create it as I go, having that research-based program, and of course I’m going to see if it’s not working, then I’m flexible to change it, but I’m going to have a good base to at least start with. (Resource Room Teacher, District B)
While district curriculum resources (which were often
assumed to be evidence-based) were important tools for
many resource room teachers we interviewed, both teachers
and administrators expressed frustration about what they
viewed as a paucity of evidence-based curriculum and
instructional resources for students in self-contained programs, particularly those serving students with low-incidence disabilities:
There’s not a lot of curriculum out there for the self-contained
classrooms. So we do have some, some for our more mild self-contained programs, specific reading, writing, math
curriculums. But we also made our own library we call it “The
Structured Autism Library” . . . it’s a library of materials we
have online that we’ve developed ’cause you can’t really pull
anything off the shelves for those kinds of kids. (Special
Education Director, District D)
. . . in self-contained settings in Ocean View, I think that I am
expected to . . . sort of use Gen Ed assessments, but in
Kindergarten they already test out of those, they already fail so
miserably at those. I am then the expert because there’s no
assessments that assess these kids’ growth so I make my own
assessments. (Self-Contained Classroom Teacher, District D)
In the context of these perceptions about the lack of relevant
evidence-based resources, we found that only a few teachers undertook individual efforts to locate and use available
research. Those who did often encountered considerable
difficulty in locating relevant research resources:
If I’m implementing something I’m not so good at then I’ll go
do some reading on it. The Autism modules online are helpful.
I’d say mostly every time I try and type something in for a
problem I’m having without knowing if there is any research
on it, I can never find it, hardly ever. (Self-Contained Classroom
Teacher, District C)
More often, teachers adopted a “progress monitoring”
approach to evidence and decision making: “We take what
we learn about the kids and we change how we instruct
them, so that’s kind of what we do every day . . . based on
each kid and their individual performance” (Self-Contained
Classroom Teacher, District B).
After our examination of each separate element of the
system, we analyzed the interactions between elements to
understand the transactional, contextual nature of practitioners’ understanding of EBPs.
A Holistic View: Relationships Between People,
Tools, and Organizations
Our data suggest that the positions practitioners occupied
within their schools and districts had considerable influence
on their access to useful resources and tools related to EBP.
The nature of the tools available to them, in turn, affected
practitioner experiences with and beliefs about the value
and relevance of EBPs in the context of decisions they were
making. Our findings are summarized below in terms of
some specific hypotheses regarding the ways in which these
dimensions of practice are related to one another.
Hypothesis 1: Practitioner beliefs about the value and
relevance of EBP are shaped by the affordances and constraints of the tools and resources they use for decision
making.
Similar to other researchers (e.g., Boardman et al., 2005;
Landrum et al., 2002), we found some skepticism about the
practical relevance of research for local decision making
across all of the practitioner groups we interviewed.
However, we also noted that this view was most prevalent
among self-contained classroom teachers and particularly
among teachers of students with low-incidence disabilities.
In many cases, both teachers and administrators working
with students with low-incidence disabilities expressed the
belief that research on “kids like ours” did not exist. For
example, a Self-Contained Classroom Teacher in District B
explained her stance about research and practice this way:
Well it’s just that there’s not a lot out there for me and maybe
it’s hard to find. I feel like through my multiple programs, I’ve
looked at a lot and there’s just not a lot out there. And I know
why—this is a small fraction of the population.
In some cases, teachers in our study described themselves
as having access to curriculum tools they considered to be
evidence-based for some populations, but inappropriate for
the specific students they served.
Hypothesis 2: Practitioner access to relevant tools and
resources is affected by the position they occupy within
the school and the district.
The self-contained classroom teachers we interviewed often
described themselves as being extremely isolated, and as
having relatively little access to tools and resources (research
studies, evidence-based curriculum, and relevant evidence-based professional development) they viewed as useful for
the students they taught. We found that teachers who worked
in positions that were relatively close to general education
often had access to more tools and resources (evaluation
instruments, curriculum guides, professional development)
related to EBP, but were also more likely to encounter tensions between mandates from general education and practices they viewed as appropriate for students with special
education needs. The effects of organizational position were
also mediated by district policies and practices around collaboration. For example, one district had strategically developed specific organizational policies and practices to support
collaboration among their self-contained classroom teachers.
One of the teachers in this district commented on the value of
these practices as a support for implementation of EBPs:
I definitely believe in the PLC (professional learning
community) model and just sharing ideas and having, taking on
new strategies, and monitoring our own use of them. I think
that is kind of the future. I think you’re going to get more
buy-in, than a sit and get PD session on evidence-based
practice. I think its been proven actually, that you just sit and
get it and then you don’t have to use it, no one checks in with
you to make sure you are using it. So making groups of people
accountable (to each other) makes a ton of sense to me. (Self-Contained Classroom Teacher, District B)
More often, however, district policies, resources, and supports related to EBP were reported to be focused on curriculum and instructional programs for students with
high-incidence disabilities.
Teachers in self-contained classrooms often found these tools and resources, and the idea of EBP itself, to be of little direct value to their daily work.
Hypothesis 3: How practitioners define EBP affects how
they interpret the value and relevance of tools and other
organizational resources available to them related to EBP.
As we explain further below, most of the teachers and
administrators we interviewed defined EBP primarily in
terms of prescriptions for practice that were made by external authorities. Many of these informants were also those
who expressed ambivalence, and often outright skepticism,
about the value of EBPs for their work. In contrast to this
pattern, a few of the practitioners we talked with appeared
to have a more nuanced way of defining EBP. In these cases,
EBP was defined less as a prescription for practice than as a
resource for decisions they would make in the classroom:
I think it would be wonderful to be informed of that research,
and the best teacher would have all that information, and be
able to look at the kid, and provide them an opportunity with a
program that is research-based and validated and everything,
and look at how the child is responding to the program, give it
a little bit of time, make sure you’re delivering it with
authenticity and the way it’s supposed to be delivered, you
know, give it 3–4 weeks, and if it’s not working you need to
find something else. (Resource Room Teacher, District B)
Discussion
Over the last decade, the notion of EBP has become one of
the most influential policy constructs in the field of special
education. In this study, we sought to improve our understanding of the ways practitioners define the idea of EBP and
interpret its relevance in the contexts of their daily practice.
In addition, we hoped to extend previous research by learning more about some of the contextual factors that might
influence practitioner views about EBP. To investigate these
two issues, we conducted interviews with special education
professionals in four local school districts, including those
occupying positions as directors of special education,
resource room teachers, self-contained classroom teachers
of students with emotional and behavioral disabilities, and
teachers of students with low-incidence developmental disabilities. Our analysis of these data was guided by some general precepts drawn from sociocultural theory (Chaiklin &
Lave, 1993; Vygotsky, 1978), particularly the idea that social
practice can be understood as a process in which individuals
are continually negotiating ways of participating in collective activity (Nicolini, Gherardi, & Yanow, 2003).
We found that the practitioners we interviewed often
defined EBP in ways that externalized the locus of authority
for what constituted relevant evidence for practice as the
results of “studies someone had done.” Ironically, this view
also appeared to be the focus of considerable tension and concern, as many of the practitioners we interviewed expressed
the idea that research did not adequately reflect the characteristics of the students they served or the contexts of the decisions
about practice they were charged with making on a daily basis.
Our findings extend previous research (e.g., Boardman
et al., 2005; Jones, 2009) by identifying some specific
hypotheses regarding the ways practitioner interpretations
and responses to contemporary pressures for use of EBPs
may be shaped by the organizational contexts of their work,
as well as the professional resources (research, curriculum
tools, training) that are available to them. We consider some
implications of these general ideas below.
Context, Participation, and Practice
In the interviews we conducted, we were continually
impressed with the ways in which practitioners’ ideas and
decisions about their ways of doing things were shaped by the
resources available to them, the policies and practices of the
schools and districts in which they worked, and the values and
norms of their colleagues. The view that emerges from
these interviews suggests many ways in which decisions
about practice—that is, decisions about how to do things in
the classroom—are distributed across multiple participants
and multiple settings. We believe this perspective is in tension
with much of the literature on implementation of EBP, which
tends to reflect a much more individualistic view of the problem, with an accompanying set of intervention strategies
focused primarily on individual practice (e.g., Jones, 2009;
Landrum et al., 2002). Just to be clear, we are not suggesting
that researchers are naïve about what are often conceptualized
as “contextual variables” affecting individual practice.
The hypothesis is actually more provocative—that is,
that practice itself may be understood as essentially social,
relational, and distributive in nature (Edwards, 2012). Such
a view does not suggest that changing individual behavior is
unimportant, much less that the “real” issues reside only in
the “system” (Fixen et al., 2013). Our data suggest, rather,
that our understanding of the problems of implementing
EBP may be improved by more focused investigation of the
ways in which individual and collective dimensions of professional practice create and sustain one another (Nicolini
et al., 2003; Peck, Gallucci, Sloan, & Lippincott, 2009).
A recurring theme in the interview data we collected had
to do with the ways that practitioner definitions of EBP, and
their views of the relevance of this construct for their work,
were confounded with their concerns about the locus of
power and control over decision making about classroom
practice. In some cases, these concerns appeared to be more
salient to our informants than those about the substance of
the decisions themselves, and reflected a sense of practitioner alienation from researchers (and, in some instances,
administrators). Our interpretation of these concerns was
that they represented fundamental and perennial tensions
about power and authority over decisions in the classroom.
Issues of power and control are by no means limited to the
problems of implementation of EBP, but are, rather, one of
the most pervasive challenges to any kind of systemic
change effort in educational organizations (van den Berg,
2002). We hasten to point out that we do not interpret this
finding in personological terms, that is, we do not see these
practitioners as simply “burned out” or “resistant.” Rather,
we view their responses to be, at least in part, a reflection of
the tensions they experience between the organizational
contexts of their work, the tools and resources available to
them, and the demands of their work in the classroom.
These hypotheses appear worthy of further investigation.
Professional Development and Implementation of EBP

The interviews we conducted raise several questions about training and professional development related to implementation of EBP. First, we found that most of the 27 teachers and administrators we interviewed were ill-prepared to understand and respond to the essential epistemological tensions between the “general case” and the “local case” that inevitably accompany decisions about practice (Harn
et al., 2013). As we noted above, this sample of practitioners
more often interpreted research as a prescription for practice,
rather than as a source of hypotheses about what might work
with their students (Cook & Odom, 2013). This “top-down”
view of EBP appeared to be distributed equally across teachers and administrators in our study. As we have noted, this
interpretation of the EBP construct was often associated with
skeptical views about the value of EBPs for decisions about
classroom practice. This finding, if replicated through additional research with larger and more diverse samples, may be
an important focus for professional development.
Neither the teachers nor the administrators we interviewed appeared well prepared to access tools that were not
immediately available in the districts in which they worked.
For example, very few of the teachers and administrators
we interviewed seemed to be aware of the tremendous
changes in online accessibility of library resources that have
taken place in the past few years, as well as the explosion of
online training resources related to EBPs in special education (e.g., http://iris.peabody.vanderbilt.edu/index.html).
While the restricted sample of practitioners we interviewed
mandates caution in generalizing from these findings, these
data do suggest the value of undertaking a broader investigation of the kinds of preparation and support practitioners
receive via both preservice and professional development
activities related to online access to tools for evidence-based decision making about curriculum and instruction.
We conclude by noting the congruence of our emerging
hypotheses regarding the socially negotiated and distributed
nature of practice with findings from other educational settings (Ingram, Seashore Louis, & Schroeder, 2004; Peck &
McDonald, 2014; Spillane, Halverson, & Diamond, 2001)
and other fields of practice (Nicolini et al., 2003). These
studies, and our present findings, suggest that achieving
implementation of EBP may be viewed most productively
not simply as a problem of individuals acquiring new
knowledge, skills, and dispositions but rather as a broader
social process in which individual practitioners participate
in the collective negotiation of decisions about practice in
the context of the tools available to them and the organizational conditions in which they work.
Acknowledgments
We are thankful to the teachers and principals who generously
shared their thinking with us.
Authors’ Note

The content is solely the responsibility of the authors and does not necessarily represent the official views of the Office of Special Education Research.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article:
Research reported here was supported by Award Number
H325D100072A.
References
Aarons, G. A., & Palinkas, L. A. (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and
Mental Health Services Research, 34, 411–419.
Billett, S. (2003). Sociogeneses, activity and ontogeny. Culture &
Psychology, 9, 133–169.
Billett, S. (2006). Constituting the workplace curriculum. Journal
of Curriculum Studies, 38, 31–48.
Boardman, A. G., Argüelles, M. E., Vaughn, S., Hughes, M. T.,
& Klingner, J. (2005). Special education teachers’ views of
research-based practices. The Journal of Special Education,
39, 168–180.
Bronfenbrenner, U. (1979). Contexts of child rearing: Problems
and prospects. American Psychologist, 34, 844–850.
Chaiklin, S., & Lave, J. (1993). Understanding practice:
Perspectives on activity and context. Cambridge, UK:
Cambridge University Press.
Charmaz, K. (2002). Qualitative interviewing and grounded theory
analysis. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook
of interview research: Context & method (pp. 675–694).
London, England: SAGE.
Cook, B. G., & Odom, S. L. (2013). Evidence-based practices
and implementation science in special education. Exceptional
Children, 79, 135–144.
Cook, B. G., Tankersley, M., Cook, L., & Landrum, T. J. (2008).
Evidence-based practices in special education: Some practical
considerations. Intervention in School and Clinic, 44, 69–75.
Council for Exceptional Children. (2014). Standards for evidencebased practices in special education. Reston, VA: Author.
Edwards, A. (2012). The role of common knowledge in achieving
collaboration across practices. Learning, Culture and Social
Interaction, 1, 22–32.
Engeström, Y. (2001). Expansive learning at work: Toward an
activity theoretical reconceptualization. Journal of Education
and Work, 14, 133–156.
Falmagne, R. (1995). The abstract and the concrete. In L. Martin,
K. Nelson, & E. Tobach (Eds.), Sociocultural psychology:
Theory and practice of doing and knowing (pp. 205–228).
Cambridge, UK: Cambridge University Press.
Fixsen, D. L., Blase, K. A., Metz, A., & Van Dyke, M. (2013). Statewide
implementation of evidence-based programs. Exceptional
Children, 79, 213–230.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory:
Strategies for qualitative research. Chicago, IL: Aldine Press.
Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why
don’t we see more translation of health promotion research to
practice? Rethinking the efficacy-to-effectiveness transition.
American Journal of Public Health, 93, 1261–1267.
Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about
fidelity of implementation in schools? Exceptional Children,
79, 181–193.
Honig, M. (2006). Complexity and policy implementation. In M.
Honig (Ed.), New directions in education policy implementation: Confronting complexity (pp. 1–25). Albany: State
University of New York Press.
Ingram, D., Seashore Louis, K., & Schroeder, R. (2004). Accountability
policies and teacher decision making: Barriers to the use of data
to improve practice. Teachers College Record, 106, 1258–1287.
Jones, M. L. (2009). A study of novice special educators’ views
of evidence-based practices. Teacher Education and Special
Education, 32, 101–120.
Klingner, J. K., Boardman, A. G., & McMaster, K. L. (2013).
What does it take to scale up and sustain evidence-based practices? Exceptional Children, 79, 195–211.
Landrum, T. J., Cook, B. G., Tankersley, M., & Fitzgerald, S.
(2002). Teacher perceptions of the trustworthiness, usability, and accessibility of information from different sources.
Remedial and Special Education, 23, 42–48.
Leontev, A. N. (1978). Activity, consciousness, and personality
(M. J. Hall, Trans.). Englewood Cliffs, NJ: Prentice Hall.
(Original work published 1975)
Lewin, K. (1951). Field theory in social science: Selected theoretical papers. Oxford, UK: Harpers.
Marx, K. (1984). Theses on Feuerbach. In K. Marx & F. Engels
(Eds.), The individual and society (pp. 5–7). Moscow, Russia:
Progress. (Original work published 1888)
McDiarmid, G. W., & Peck, C. (2012, April). Understanding
change in teacher education programs. Paper presented at
the Annual Meeting of the American Educational Research
Association, Vancouver, B.C., Canada.
Merriam, S. B. (2009). Qualitative research: A guide to design
and implementation. Hoboken, NJ: John Wiley.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: SAGE.
Nicolini, D., Gherardi, S., & Yanow, D. (2003). Knowing in organizations: A practice-based approach. Armonk, NY: M.E. Sharpe.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H.,
Thompson, B., & Harris, K. R. (2005). Research in special
education: Scientific methods and evidence-based practices.
Exceptional Children, 71, 137–148.
Peck, C. A., Gallucci, C., Sloan, T., & Lippincott, A. (2009).
Organizational learning and program renewal in teacher education: A socio-cultural theory of learning, innovation and
change. Educational Research Review, 4, 16–25.
Peck, C. A., & McDonald, M. (2014). What is a culture of evidence? How do you get one? And . . . should you want one?
Teachers College Record, 116(4), 1–27.
Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general
theory of planning. Policy Sciences, 4, 155–169.
Scribner, S. (1997). Mind and social practice: Selected writings of
Sylvia Scribner. Cambridge, UK: Cambridge University Press.
Spillane, J. P., Halverson, R., & Diamond, J. B. (2001).
Investigating school leadership practice: A distributed perspective. Educational Researcher, 30, 23–28.
Strauss, A., & Corbin, J. M. (1997). Grounded theory in practice.
Thousand Oaks, CA: SAGE.
van den Berg, R. (2002). Teachers’ meanings regarding educational practice. Review of Educational Research, 72, 577–625.
Vygotsky, L. S. (1978). Mind in society: The development of
higher psychological processes. Cambridge, UK: Harvard
University Press.
Psychology in the Schools, Vol. 52(2), 2015
View this article online at wileyonlinelibrary.com/journal/pits
© 2014 Wiley Periodicals, Inc.
DOI: 10.1002/pits.21815
TRAINING TEACHERS TO USE EVIDENCE-BASED PRACTICES FOR AUTISM:
EXAMINING PROCEDURAL IMPLEMENTATION FIDELITY
AUBYN C. STAHMER AND SARAH RIETH
Rady Children’s Hospital, San Diego and University of California, San Diego
EMBER LEE
Rady Children’s Hospital, San Diego
ERICA M. REISINGER AND DAVID S. MANDELL
The Children’s Hospital of Philadelphia Center for Autism Research
JAMES E. CONNELL
AJ Drexel Autism Institute
The purpose of this study was to examine the extent to which public school teachers implemented
evidence-based interventions for students with autism in the way these practices were designed.
Evidence-based practices for students with autism are rarely incorporated into community settings,
and little is known about the quality of implementation. An indicator of intervention quality is
procedural implementation fidelity (the degree to which a treatment is implemented as prescribed).
Procedural fidelity likely affects student outcomes. This project examined procedural implementation fidelity of three evidence-based practices used in a randomized trial of a comprehensive
program for students with autism in partnership with a large, urban school district. Results indicate
that teachers in public school special education classrooms can learn to implement evidence-based
strategies; however, they require extensive training, coaching, and time to reach and maintain
moderate procedural implementation fidelity. Procedural fidelity over time and across intervention strategies is examined. © 2014 Wiley Periodicals, Inc.
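Because procedural implementation fidelity is defined here as the degree to which a treatment is implemented as prescribed, it is commonly reported as the percentage of prescribed steps an observer scores as correctly implemented. The short Python sketch below illustrates that calculation; the step names, checklist, and 75% result are hypothetical examples for illustration, not data or instruments from this study.

def fidelity_score(observed):
    """Percentage of prescribed steps scored as correctly implemented."""
    if not observed:
        raise ValueError("no steps observed")
    return 100.0 * sum(observed.values()) / len(observed)

# Hypothetical observation of one discrete trial teaching (DTT) session:
# each prescribed step is scored True (implemented correctly) or False.
dtt_observation = {
    "secures student attention": True,
    "delivers clear instruction": True,
    "prompts appropriately": False,
    "delivers contingent reinforcement": True,
}

print(f"Procedural fidelity: {fidelity_score(dtt_observation):.0f}%")  # -> 75%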
Special education enrollment for children with autism in the United States has quadrupled
since 2000 (Scull & Winkler, 2011), and schools struggle to provide adequate programming to these
students. A growing number of interventions for children with autism have been proven efficacious
in university-based research settings, but much less attention has been given to practical issues
of implementing these programs in the classroom, where most children with autism receive the
majority of their care (Sindelar, Brownell, & Billingsley, 2010). In general, evidence-based practices
for children with autism are rarely incorporated into community settings (Stahmer & Ingersoll, 2004).
Teachers in public schools report receiving inadequate training and rate their personal efficacy in
working with children with autism as low (Jennett, Harris, & Mesibov, 2003). Training public
educators to provide evidence-based practices to children with autism is a central issue facing the
field (Simpson, de Boer-Ott, & Smith-Myles, 2003).
One major challenge to implementing evidence-based practices for children with autism in
community settings is the complexity of these practices. Strategies based on the principles of
applied behavior analysis have the strongest evidence to support their use (National Standards
This research was funded by grants from the National Institute of Mental Health (5R01MH083717) and the
Institute of Education Sciences (R324A080195). We thank the School District of Philadelphia and its teachers and
families for their collaboration and support. Additionally, Dr. Stahmer is an investigator with the Implementation
Research Institute at the George Warren Brown School of Social Work, Washington University, St. Louis, through an
award from the National Institute of Mental Health (R25MH080916).
Correspondence to: Aubyn C. Stahmer, Child and Adolescent Services Research Center & Autism Discovery
Institute, Rady Children’s Hospital, San Diego, 3020 Children’s Way, MC5033, San Diego, CA 92123. E-mail:
astahmer@ucsd.edu
Project, 2009). These practices vary greatly in structure and difficulty. Some strategies, such as
discrete trial teaching (DTT; Leaf & McEachin, 1999; Lovaas, 1987), are highly structured and
occur in one-on-one settings, whereas others are naturalistic, can be conducted individually or
during daily activities, and tend to be more complex to implement (e.g., incidental teaching; Fenske,
Krantz, & McClannahan, 2001; or pivotal response training [PRT]; Koegel et al., 1989). There are also
classroom-wide strategies and structures based on applied behavior analysis, such as teaching within
functional routines (FR; Brown, Evans, Weed, & Owen, 1987; Cooper, Heron, & Heward, 1987;
Marcus, Schopler, & Lord, 2000; McClannahan & Krantz, 1999). Although all of these evidence-base...