- Look at two of my classmates' posts. I need you to respond to each one
separately. Don't write about how good or bad their posts are. All you need to
do is choose one point from each post and explore it a little, with one
supporting source for each response. In the attachment, you will find all the
classmates' posts.
- APA style.
- Reading:
Reilly, M., & Markenson, D. S. (2010). Health care emergency management: Principles and practice.
  Chapter 6: Introduction to Exercise Design and Evaluation
  Chapter 8: Education and Training
Emergency management principles and practices for healthcare systems (2006).
Kaji, A. H., Langford, V., & Lewis, R. J. (2008). Assessing hospital disaster preparedness: A comparison of an on-site survey, directly observed drill performance, and video analysis of teamwork. Annals of Emergency Medicine, 52(3), 195-201.
Assessing Hospital Disaster Preparedness.pdf
- Discussion board questions:
* What are some of the biggest challenges in developing and implementing a preparedness
exercise in a hospital setting?
* What differences/similarities exist between hospital and municipal preparedness
exercises?
Student 1 post:
Challenges Facing the Healthcare System During Emergencies
The healthcare system plays an important role in times of disaster. Through
proper planning, training, command, and coordination, the system should always
be responsive. However, the situation on the ground is different: most
hospitals are caught off guard during disasters, which makes an already bad
situation worse. The main challenge lies in developing and implementing the
required preparedness actions.
Among the greatest challenges for hospital organizations is surge capacity.
Most hospitals in densely populated areas operate at or near full capacity.
Consequently, during disasters these hospitals are severely limited in their
ability to expand (Kaji, Langford, & Lewis, 2008). Surveys have found, for
example, that the available beds, ventilators, isolation beds, and drugs are
insufficient in times of a large-scale disaster.
Another challenge is the lack of a good communication network. More emphasis
needs to be placed on the importance of a good flow and channel of
communication. Communication helps ensure that victims are directed to the
most appropriate facilities. It also gives hospitals advance notice of the
number of victims to expect and the type of response required. According to
Niska and Burt (2005), only about 72% of hospitals included bioterrorism
provisions in their response plans. Hence, it can be argued that the
communication systems of most hospitals are considerably weak.
Similarities Between Municipal and Hospital Preparedness
The teams tasked with tackling emergencies in cases of disaster are the
municipal and healthcare workers. Both teams share a lot when it comes to
disaster management. The municipal team, for instance, provides emergency plan
templates, training and exercise development, and facilitation of the same. In
addition, the municipality enhances information sharing with the different
hospitals, creating situational awareness of primary care needs before and
after a disaster.
In conclusion, the mutual partnership between healthcare providers and local
authorities is imperative, and more resources should be channeled into
enhancing this cooperation. Hospital management around the country should come
up with realistic policies that can be implemented to make sure disasters and
emergencies are addressed in the shortest possible time.
References
Kaji, A. H., Langford, V., & Lewis, R. J. (2008). Assessing hospital disaster preparedness: A comparison of an on-site survey, directly observed drill performance, and video analysis of teamwork. Annals of Emergency Medicine, 52(3), 195-201. doi:10.1016/j.annemergmed.2007.10.026
Niska, R. W., & Burt, C. W. (2005). Bioterrorism and mass casualty preparedness in hospitals: United States, 2003. Emmitsburg, MD: National Emergency Training Center.
RAND Corporation. (2004). RAND study shows compensation for 9/11 terror attacks tops $38 billion; businesses receive biggest share. Retrieved from http://www.rand.org/news/press.04/11.08b.html
Student 2 post:
Hospital preparedness, especially for disasters, is a common requirement that
should be taken seriously. The majority of hospitals in both urban and rural
areas do not use disaster preparation techniques in managing disasters
(Beitsch et al., 2006). This usually results in most of them failing when a
disaster takes place. It is essential that hospitals always be prepared to
handle these situations, since they are responsible for people's lives. There
are many challenges that make it difficult for hospitals to develop or
implement their preparedness plans; this post will discuss some of them.
Budgets are among the leading problems that affect the planning and
implementation of preparedness plans. The hospital sector will always require
sufficient financial allocations for purchasing emergency equipment that will
help in rescuing people from arising dangers. In some cases, hospitals may be
forced to spend heavily on patients' recovery. This, therefore, means that
money should always be available to cover this, with some finances reserved
for other preparations and the training of staff.
Some hospitals face problems with administration: the administrators do not
consider emergencies when planning for or allocating the available resources.
This leads to the misuse of resources that could instead have been used to
prepare for disasters. Training of staff is a requirement that should be met
in every hospital so that staff are able to deal with emergencies (Leinhos et
al., 2014). Emergencies in the healthcare sector cannot be handled by just
anyone; special knowledge is required so as not to cause more harm. There is
also a lack of adequate guidelines to provide direction to the staff; such
guidelines would help avoid confusion over roles.
Communication is another important factor that is not taken seriously in
healthcare preparedness. Through good communication, nurses and other staff
are informed of their roles beforehand so they know what to do, which avoids
any sort of confusion. Communication is also important in ensuring
coordination. Coordination is another problem that affects the development and
implementation of preparedness plans in the hospital setting. Coordination
between different sectors, such as the wards and the administration, helps
ensure a hospital is fully prepared.
The factors explained above help show that there is indeed a difference
between municipal preparedness and hospital preparedness for an emergency. The
hospital, for instance, requires qualified staff who are fully trained in how
to handle emergency situations (Caudle, 2009). The hospital setting is more
critical and hence requires more attention; its importance derives from its
role in treating the people affected by any emergency. It is also important to
remember that both sectors are similar in some ways: they both deal with
emergencies and require revenue, which means that financial allocation affects
them both.
In summary, emergency services are always important since they ensure that any
disaster or sudden occurrence is controlled properly. It is clear that capital
is important in planning for the control of such situations. Training of staff
is another factor that improves the efficiency of the operation. Management in
both hospitals and the municipal sector is also expected to be qualified, so
that they can do their work as required and ensure the proper use of
resources. They should also work with all sectors to enable the process of
emergency control. Through communication, every staff member will be made
aware of the situation whenever it arises. Hospitals should also maintain an
emergency department with teams that work together in controlling disasters;
this can foster specialization in this sector so that it is remembered when
funds are being allocated.
References
Beitsch, L., Kodolikar, S., Stephens, T., Shodell, D., Clawson, A., Menachemi, N., & Brooks, R. (2006). A state-based analysis of public health preparedness programs in the United States. Public Health Reports, 121(6), 737-745.
Caudle, S. (2009). An option for homeland security preparedness requirements: Consensus management system standards. Public Performance & Management Review, 33(1), 141-155.
Leinhos, M., Qari, S., & Williams-Johnson, M. (2014). Preparedness and emergency response research centers: Using a public health systems approach to improve all-hazards preparedness and response. Public Health Reports, 129, 8-18.
DISASTER MEDICINE/ORIGINAL RESEARCH
Assessing Hospital Disaster Preparedness: A Comparison of an
On-Site Survey, Directly Observed Drill Performance, and Video
Analysis of Teamwork
Amy H. Kaji, MD, MPH
Vinette Langford, RN, MSN
Roger J. Lewis, MD, PhD
From the Department of Emergency Medicine, Harbor–UCLA Medical Center, Los Angeles, CA (Kaji,
Lewis); David Geffen School of Medicine at UCLA, Torrance, CA (Kaji, Lewis); Los Angeles Biomedical
Research Institute, Torrance, CA (Kaji, Lewis); The South Bay Disaster Resource Center at
Harbor–UCLA Medical Center, Los Angeles, CA (Kaji); and MedTeams and Healthcare Programs Training
Development and Implementation, Dynamics Research Corporation, Andover, MA (Langford).
Study objective: There is currently no validated method for assessing hospital disaster preparedness.
We determine the degree of correlation between the results of 3 methods for assessing hospital disaster
preparedness: administration of an on-site survey, drill observation using a structured evaluation tool, and
video analysis of team performance in the hospital incident command center.
Methods: This was a prospective, observational study conducted during a regional disaster drill,
comparing the results from an on-site survey, a structured disaster drill evaluation tool, and a video
analysis of teamwork, performed at 6 911-receiving hospitals in Los Angeles County, CA. The on-site
survey was conducted separately from the drill and assessed hospital disaster plan structure, vendor
agreements, modes of communication, medical and surgical supplies, involvement of law enforcement,
mutual aid agreements with other facilities, drills and training, surge capacity, decontamination capability,
and pharmaceutical stockpiles. The drill evaluation tool, developed by Johns Hopkins University under
contract from the Agency for Healthcare Research and Quality, was used to assess various aspects of
drill performance, such as the availability of the hospital disaster plan, the geographic configuration of the
incident command center, whether drill participants were identifiable, whether the noise level interfered
with effective communication, and how often key information (eg, number of available staffed floor,
intensive care, and isolation beds; number of arriving victims; expected triage level of victims; number of
potential discharges) was received by the incident command center. Teamwork behaviors in the incident
command center were quantitatively assessed, using the MedTeams analysis of the video recordings
obtained during the disaster drill. Spearman rank correlations of the results between pair-wise groupings
of the 3 assessment methods were calculated.
Results: The 3 evaluation methods demonstrated qualitatively different results with respect to each
hospital’s level of disaster preparedness. The Spearman rank correlation coefficient between the
results of the on-site survey and the video analysis of teamwork was – 0.34; between the results of
the on-site survey and the structured drill evaluation tool, 0.15; and between the results of the video
analysis and the drill evaluation tool, 0.82.
Conclusion: The disparate results obtained from the 3 methods suggest that each measures distinct
aspects of disaster preparedness, and perhaps no single method adequately characterizes overall
hospital preparedness. [Ann Emerg Med. 2008;52:195-201.]
INTRODUCTION
A disaster may be defined as a natural or manmade event
that results in an imbalance between the supply and demand
for resources.1 Events of September 11, 2001, and the
devastation from Hurricanes Katrina and Rita have recently
highlighted the importance of hospital disaster preparedness
and response. Previous disasters have demonstrated
weaknesses in hospital disaster management, including
confusion over roles and responsibilities, poor
communication, lack of planning, suboptimal training, and a
lack of hospital integration into community disaster
planning.2

Editor's Capsule Summary

What is already known on this topic
Extremely little is known on how to objectively and
accurately rate hospital disaster preparedness. Scales and
measurements have been developed but not extensively
validated; most evaluations are highly subjective and
subject to bias.

What question this study addressed
At 6 sites, 3 evaluation methods (an onsite predrill
survey, a real-time drill performance rating tool, and a
video teamwork analysis) were used and correlations
among evaluation methods examined.

What this study adds to our knowledge
The 3 methods produced disparate evaluations of
preparedness, suggesting that the instruments are flawed,
they are measuring different things, or both.

How this might change clinical practice
Better assessment tools for hospital disaster preparedness
need to be developed, perhaps beginning with the careful
definition of what aspects of preparedness are to be
measured.
Despite The Joint Commission’s emphasis on emergency
preparedness for all hospitals, including requirements for having
a written disaster plan and participating in disaster drills, there is
currently no validated, standardized method for assessing
hospital disaster preparedness. This lack of validated assessment
methods may reflect the complex and multifaceted nature of
hospital preparedness.
To be prepared to care for an influx of victims, a hospital
must have adequate supplies, equipment, and space, as well as
the appropriate medical and nonmedical staff. Survey
instruments, either self-administered or conducted on site, may
be used to assess these resources. Although surveys and
questionnaires attempt to capture a hospital’s level of
preparedness through quantifying hospital beds, ventilators,
isolation capacity, morgue space, available modes of
communication, frequency of drills, and other aspects of disaster
preparedness,3-8 it is unclear whether they are reliable or valid
predictors of hospital performance during an actual disaster, or
even during a drill. In contrast to surveys, which assess hospital
resources and characteristics during a period of usual activity,
disaster drills make use of moulaged victims to gauge hospital
preparedness and assess staff interactions in a dynamic
environment in real time.
Although hospitals routinely conduct after-drill debriefing
sessions, during which participants discuss deficiencies
warranting improvement, there is no commonly used and
validated method for evaluating hospital performance during
disaster drills. To address this gap, the Johns Hopkins
University Evidence-based Practice Center, with support from
the Agency for Healthcare Research and Quality (AHRQ),
developed a hospital disaster drill evaluation tool.9 The tool
includes separate modules for the incident command center,
triage area, decontamination zone, and treatment areas. In a
recent study, conducted in parallel with the study reported here,
we described the AHRQ evaluation tool’s internal and interrater
reliability.10 We found a high degree of internal reliability in the
instrument’s items but substantial variability in interrater
reliability.10
Recently, evidence has suggested that enhancing teamwork
among medical providers optimizes the provision of health care,
especially in a stressful setting, and some experts working in this
area have adopted the aviation model as a basis for designing
teamwork programs to reduce medical errors.11 In 1998,
researchers from MedTeams, a research corporation that focuses
on observing and rating team behaviors, set out to evaluate the
effectiveness of using aviation-based crew resource management
programs to teach teamwork behaviors in emergency
departments (EDs), conducting a prospective, multicenter,
controlled study.12 The MedTeams study, published in 2002,
demonstrated a statistically significant improvement in the
quality of team behaviors, as well as a reduction in the clinical
error rate, after completion of the Emergency Team
Coordination Course.12
Because effective teamwork and communication are essential
to achieving an organized disaster response, assessing teamwork
behavior may be a key element in a comprehensive evaluation of
hospital disaster response. Evaluating teamwork behaviors
involves the assessment of the overall interpersonal climate, the
ability of team members to plan and problem-solve, the degree
of reciprocity among team members in giving and receiving
information and assistance, the team’s ability to manage
changing levels of workload, and the ability of the team to
monitor and review its performance and improve its teamwork
processes.12 In addition to observing team members in real
time, MedTeams researchers routinely review videotaped
interactions among team members as a method of quantifying
teamwork behaviors.
The objective of our study was to determine the degree of
correlation between 3 measures of assessing hospital disaster
preparedness: an on-site survey, directly observed drill
performance, and video analysis of teamwork behaviors.
MATERIALS AND METHODS
Six 911-receiving hospitals, participating in the annual,
statewide disaster drill in November 2005, agreed to complete
the site survey and undergo the drill evaluation and video
analysis. The selection of the sample of hospitals and their
characteristics has been described previously.10 The drill
scenario included an explosion at a public venue, with multiple
victims. To preserve the anonymity of the hospitals, they are
designated numerically 1 through 6. Because all data were
deidentified and reported in aggregate, our study was verified as
exempt by the institutional review board of the Los Angeles
Biomedical Research Institute at Harbor–UCLA Medical
Center.
We used an on-site survey (included in Appendix E1,
available online at http://www.annemergmed.com), which
included 79 items focusing on areas previously identified as
standards or evidence of preparedness.1-3,13-28 The survey was a
modification of an instrument we used in a previous study.8
Compared with the original survey instrument, the number of
items was reduced from 117 to 79 by the study investigators to
eliminate items that had limited discriminatory capacity and to
reduce redundancy and workload. Survey items included a
description of the structure of the hospital disaster plan, modes
of intra- and interhospital communication, decontamination
capability and training, characteristics of drills, pharmaceutical
stockpiles, and each facility’s surge capacity (assessed by
monthly ED diversion status, number of available beds,
ventilators, negative pressure isolation rooms, etc). Because a
survey performed in 1994 demonstrated that hospitals were
better prepared when the medical directors of the ED
participated in community planning,27 we also assessed whether
each hospital participated in the local disaster planning
committee. Additional survey items examined mutual aid
agreements with other hospitals and long-term care facilities;
predisaster “preferred” agreements with medical vendors;
protocols for canceling elective surgeries and early inpatient
discharge; the ability to provide daycare for dependents of
hospital staff; the existence of syndromic surveillance systems;
ongoing training with local emergency medical services (EMS)
and fire agencies; communication with the public health
department; and protocols for instituting volunteer
credentialing systems, hospital lockdown, and managing mass
fatality incidents.
The survey was distributed by electronic mail, and between
June 2006 and June 2007, the disaster coordinators at each of
the 6 hospitals completed the survey. The on-site “inspection”
to verify the responses to the 79-item survey was performed by a
single observer (A.H.K.) between June 2006 and June 2007.
During the visit, necessary clarification of responses to the
survey items was obtained, followed by an examination of the
hospital disaster plan, the decontamination shower, the personal
protective equipment, communication systems (eg, walkie-talkies and radio system), Geiger counters, the ED, the
laboratory, the pharmacy, and the designated site of the incident
command center.
The possible answers for 71 of the 79 survey items were
assigned a point value. Depending on perceived importance,
items were allocated zero to 1 point, zero to 3 points, or zero to
5 points, with a higher score indicating better preparedness. For
example, for the question, how many patients could you treat
for a nerve agent exposure? the answer “fewer than 10” would
be given a score of zero, the answer “10 to 20” would be given a
score of 1, “20 to 30” would be given a score of 2, and “greater
than 30” would be given a score of 3. There were also 8 of 79
questions to which no point value was assigned because the item
was not designed to discriminate between levels of preparedness.
For example, no point value was assigned to the question, have
you ever had to truly implement the incident command
structure? A summary score for overall preparedness was
calculated by summing each of the item scores. The maximum
possible score was 215 (see Appendix E1, available online at
http://www.annemergmed.com).
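To make the survey scoring mechanics concrete, here is a minimal sketch of the summing logic described above. The nerve-agent item and its point values come from the example in the text; the dictionary layout and all names are hypothetical, not taken from the actual instrument in Appendix E1.

```python
# Sketch of the on-site survey scoring: each scored item maps a response
# to a point value (higher = better prepared), and the overall score is
# the sum across items. Only the nerve-agent example from the text is
# shown; the real instrument has 71 scored items and a maximum of 215.

ITEM_POINTS = {
    "nerve_agent_treatment_capacity": {
        "fewer than 10": 0,
        "10 to 20": 1,
        "20 to 30": 2,
        "greater than 30": 3,
    },
    # ...the remaining scored items would be defined the same way
}

def overall_score(responses):
    """Sum point values over scored items; unscored items contribute nothing."""
    return sum(ITEM_POINTS[item][answer]
               for item, answer in responses.items()
               if item in ITEM_POINTS)

print(overall_score({"nerve_agent_treatment_capacity": "20 to 30"}))  # -> 2
```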
As described in our recent study and companion article
evaluating the reliability of the drill evaluation tool, 32 trained
medical student observers were deployed to the 6 participating
hospitals to evaluate drill performance using the AHRQ
instrument.9 Two hundred selected dichotomous drill
evaluation items were coded as better versus poorer preparedness
by the study investigators.10 An unweighted “raw performance”
score was calculated by summing these dichotomous indicators.
Although the drill evaluation instrument assesses multiple areas
of the hospital, including triage, decontamination, treatment,
and incident command, we chose to consider only those items
related to the performance of the incident command center
because it was the only drill evaluation module that was applied
at all 6 hospitals, as described in the companion article.10
Moreover, the MedTeams evaluation (see below) was based on
video analysis of the incident command center. We also believed
that a high level of performance in the incident command
center would be correlated with high levels of performance
elsewhere in the hospital.
There were 45 dichotomous items evaluating the incident
command center that could be dichotomously coded as
indicating better or worse preparedness. Examples of drill
evaluation items included whether the incident command
center had a defined boundary zone, the incident commander
took charge of the zone, the incident commander was easily
identifiable, the hospital disaster plan was accessible, and
whether the noise level in the incident command center
interfered with effective communication.
Because of the limited number of observers, 2 hospitals had 1
observer deployed to the incident command center, whereas 4
hospitals had 2 observers. When 2 observers were available, the
average of the 2 scores was calculated.
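The drill score is computed in a similarly mechanical way: sum the dichotomously coded indicators, then average across observers where two were deployed. A minimal sketch follows; the indicator names and values are illustrative, not items from the AHRQ tool.

```python
# Sketch: unweighted "raw performance" score for the incident command
# center, summed from dichotomous better-preparedness indicators
# (True = better) and averaged when 2 observers rated the same hospital.

def raw_score(indicators):
    return sum(indicators.values())  # True counts as 1, False as 0

observer_a = {"boundary_defined": True, "commander_identifiable": True,
              "noise_level_acceptable": True}
observer_b = {"boundary_defined": True, "commander_identifiable": False,
              "noise_level_acceptable": True}

scores = [raw_score(obs) for obs in (observer_a, observer_b)]
hospital_score = sum(scores) / len(scores)  # average of the 2 observers
print(hospital_score)  # -> 2.5
```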
A professional video company was employed to film activities
at each of the hospitals on the day of the disaster drill. Although
various areas of the hospital were filmed, the predominant focus
was on the incident command center and capturing the
interactions among its members. The videos were edited,
transferred to DVDs, and sent to MedTeams, whose staff were
blinded to the drill and site survey results, for analysis and
scoring of teamwork behaviors.
To assess teamwork behaviors, MedTeams uses a team
dimension rating scale based on the 5 team dimensions of the
behaviorally anchored rating scales and an overall score, which is
a mean of the 5 team dimensions. The range of possible scores
for each of the team dimensions was 1 to 7. “Team dimension
rating” is the term applied to the process of observing team
behavior and assigning ratings to each of the 5 behaviorally
anchored rating scale team dimensions.29
Each team dimension has specific criteria that are used for
scoring purposes. The first team dimension assesses how well the
team structure was constructed and maintained. For example,
the observer is asked to rate how efficiently the leader assembled
the team, assigned roles and responsibilities, communicated
with each of the team members, acknowledged contributions of
team members to team goals, demonstrated mutual respect in all
communications, held everyone accountable for team outcomes,
addressed professional concerns, and resolved conflicts
constructively.29
The second team dimension assesses planning and problem-solving capability. Observations include whether team members
were engaged in the planning and decisionmaking process,
whether protocols were established to develop a plan, whether
team members were alerted to potential biases and errors, and
how errors were avoided and corrected.29
The third team dimension evaluates team communications.
Observations include whether situational awareness updates
were provided, whether a common terminology was used,
whether the transfer of information was verified, and whether
decisions were communicated to team members.29
The fourth team dimension assesses the management of team
workload. The observer records whether there was a team-established plan to redistribute the workload, integrating
individual assessments of patient needs, overall caseload, and
updates from actions of team members.29
The final team dimension describes team improvement skills.
Recorded observations include whether there were shift reviews
of teamwork, whether teamwork considerations were included
in educational forums, and whether situational learning and
teaching were incorporated into such forums.29
Although behaviorally anchored rating scale descriptions
specify distinct clusters of teamwork behaviors, there is some
inevitable overlap across the 5 team dimensions. The
behaviorally anchored rating scale describes concrete and
specific behaviors for each team dimension and provides anchors
for the lowest, middle, and highest values (standards of
judgment). Additionally, the behaviorally anchored rating scale
delineates criteria for assigning a numeric value to the rater’s
judgment, and each of the 5 dimensions is rated on a numeric
scale of 1 to 7, in which 1 is very poor and 7 is deemed
superior.29
Primary Data Analysis
Data obtained from the on-site survey and drill evaluation
tool were recorded on data collection forms. All data were stored
in an Access database (Access 2003; Microsoft Corporation,
Redmond, WA). The database was translated into SAS format
using DBMS/Copy (DataFlux Corporation, Cary, NC). The
statistical analysis was performed using SAS, version 9.1 (SAS
Institute, Inc., Cary, NC), and Stata, version 9.2 (StataCorp,
College Station, TX).
Table. Results of 3 methods of assessing hospital disaster drill performance.

Hospital   On-site Survey   Modified AHRQ Score   MedTeams ICC Score
Number*    (1–215) (%)†     in ICC (1–45) (%)†    (1–7) (%)†
1          155 (72)         31 (69)               5 (71)
2          155 (72)         19 (42)               4 (57)
3          186 (87)         27 (60)               4.8 (69)
4          159 (74)         34 (76)               5 (71)
5          166 (77)         24 (53)               4.2 (60)
6          152 (71)         26 (58)               5 (71)

ICC, incident command center.
*Note that there was only 1 observer deployed to the ICC at hospitals 1 and 4, whereas the remaining 4 hospitals had 2 observers simultaneously deployed to the ICC, and the score represents the average of the 2 scores.
†Percentage of maximum possible score for that assessment method.
Because the results from each of the 3 evaluation methods
could not be assumed to be normally distributed, pair-wise
nonparametric Spearman rank correlation coefficients were
calculated to assess the relationship between the results of the
on-site survey, the drill evaluation tool, and the video analysis of
teamwork behaviors.
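As a rough illustration, the pair-wise correlations can be recomputed from the per-hospital scores listed in the Table. The sketch below substitutes scipy.stats.spearmanr for the SAS/Stata workflow the authors describe; it approximately reproduces the reported coefficients, with small differences possible from rounding and tie handling.

```python
# Pair-wise Spearman rank correlations among the 3 assessment methods,
# using the per-hospital scores from the Table (hospitals 1-6, in order).
from scipy.stats import spearmanr

survey   = [155, 155, 186, 159, 166, 152]  # on-site survey (max 215)
drill    = [31, 19, 27, 34, 24, 26]        # modified AHRQ score in the ICC
medteams = [5, 4, 4.8, 5, 4.2, 5]          # MedTeams ICC score (max 7)

pairs = {
    "survey vs drill tool":     (survey, drill),
    "survey vs video analysis": (survey, medteams),
    "drill tool vs video":      (drill, medteams),
}
for label, (a, b) in pairs.items():
    rho, _p = spearmanr(a, b)  # returns (correlation, p-value)
    print(f"{label}: rho = {rho:+.2f}")
```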
RESULTS
The summary on-site survey scores are shown in the Table. All
hospitals had a disaster plan that was based on the Hospital
Incident Command System, a policy to cancel elective surgery
and for early discharge to make room for incoming disaster
victims. Five of the 6 (83%) hospitals had a protocol for
hospital lockdown, involved the local police department in the
plan, and had a volunteer credentialing policy. Only 3 of the 6
(50%) had a protocol to provide daycare for children of hospital
staff, as well as a designated overflow area for victims in the
plan. Although all hospitals had mutual aid or “preferred”
agreements with vendors, only 2 of the 6 hospitals (33%) had a
mutual aid agreement with a long-term care facility, whereas 5
of 6 (83%) had agreements with other hospitals. In terms of
surge capacity, 5 of the 6 (83%) had a licensed bed capacity
greater than 200 and the ability to create isolation beds, and
only 1 (17%) stated that they were on ambulance diversion
greater than 20% of the time. Yet, 3 (50%) hospitals stated that
they had fewer than 10 isolation rooms, and 4 (67%) were
affected by the nursing shortage. All the hospitals had EMS-compatible radios, walkie-talkies, availability of HAM radios,
level C personal protective equipment, a warm water
decontamination shower, Geiger counters, and an antibiotic
stockpile. Five (83%) had greater than 3 days’ worth of hospital
supplies, a chemical antidote stockpile, and a surveillance system
in place. Four hospitals (67%) stated that they would be able to
obtain 5 more ventilators and had greater than 20 ventilators on
hand. All hospitals stated that they conducted drills with
multiple agencies, and 5 (83%) conducted at least biannual
decontamination training for staff.
The raw drill performance scores for the incident command
center ranged from 19 of 94 (20%) to 34 of 94 (36%). The
Volume , . : September
Kaji, Langford & Lewis
Assessing Hospital Disaster Preparedness
Figure. Pair-wise comparisons of the 3 methods of disaster preparedness. Note the expanded scales used for each
preparedness measure.
various scores of each of the hospitals are listed in the Table.
The MedTeams team dimension rating overall average scores
ranged from 4.2 to 4.8. The specific team dimension rating
range for each of the 5 dimensions was as follows: 3 to 6 for
maintaining team structure and climate, 3 to 6 for planning and
problem-solving ability, 4 to 6 for team communications, 4 to 5
for workload management, and 4 to 5.8 for the ability to
improve team skills (Table).
Pair-wise comparisons for the 3 evaluative methods are
shown in the Figure. Spearman correlation results for the
pair-wise comparisons were as follows: 0.14 (between the on-site
survey and the drill evaluation tool), -0.33 (between the on-site
survey and the video analysis), and 0.82 (between the drill
evaluation tool and the video analysis). Although we believe our
general observation that there is highly variable correlation
between results from the 3 methods for assessing hospital
preparedness, we cannot define the pair-wise correlations with
much precision, given our limited sample size of hospitals.

LIMITATIONS
Our study has a number of limitations. The sample size of 6
hospitals is small, and self-selection bias is likely.10 The survey
was conducted in Los Angeles County, with its unique hazards,
which limits the generalizability of the results. Because of the
small sample size, confidence intervals for the Spearman rank
correlation coefficients could not be determined, and thus,
although we believe our general observation that there is limited
correlation between the results of 3 methods for assessing
hospital preparedness, we could not define the pair-wise
correlation coefficients with much precision.

Our revised 79-item survey instrument has not been
validated, although the questions were created from a review of
the existing literature and had been used previously. The survey
results depend on the accuracy of the disaster coordinators, and
there is a possibility that the respondents answered in such a
way as to appear better prepared. However, we believe the on-site
verification of survey results diminishes this possibility. The
AHRQ disaster drill evaluation tool also has not been validated
against hospital performance in a real disaster, and its interrater
and internal reliability have only been assessed in our previous
investigation.10 Finally, the results from the drill evaluation and
the video analysis were focused only on the incident command
center because we assumed that a high level of performance in
the incident command center would be correlated with high
levels of performance elsewhere in the hospital.

DISCUSSION
There is no standard evaluation method for assessing hospital
disaster preparedness. Despite The Joint Commission
requirement for hospital drills to be conducted yearly, there is
no evidence demonstrating that a certain type of drill, or that
practicing drills at all, improves hospital preparedness.30 Our
results demonstrate highly variable correlations between 3
evaluative methods, and this suggests that each method may be
assessing different dimensions of hospital disaster preparedness.
The video analysis focuses on evaluating teamwork behaviors,
whereas the on-site survey emphasizes whether a hospital has the
appropriate supplies, equipment, and staff. The drill evaluation
tool incorporates items that attempt to assess the teamwork
behaviors of drill participants, as well as the adequacy of
supplies, equipment, and staff at the hospital. The correlation
between the drill evaluation instrument results and the video
analysis was the highest of the 3 pair-wise comparisons. Perhaps
this is because both of these instruments assess aspects of
communication and teamwork behaviors, whereas the survey
instrument focuses on quantifying the supply, equipment, and
personnel resources of the hospital.
The 3 evaluative methods also demonstrate limited
discriminatory capability to assess disaster preparedness. In fact,
3 hospitals obtained the same overall score by the MedTeams
evaluator. Identical scores were also obtained when the complete
on-site survey was used to assess preparedness.8 The drill
evaluation tool appeared to have the best discriminatory
capability, at least in that there were no identical scores. All our
hospitals are in the same geographic area, and there are likely
regional standards that are common to all. This may make
consistent discrimination difficult.
Given the lack of standards for preparedness, developing a
universally accepted tool to assess hospital disaster preparedness is a
daunting task. Perhaps our findings can be used as a basis for a
more comprehensive approach that reflects both communication
and teamwork behaviors, as well as a quantitative assessment of
surge capacity, supplies, and equipment. The correlation analysis
suggests that the video analysis and drill evaluation tool had the
greatest degree of similarity. One solution may therefore be to
combine items from the tools with the least redundancy, such as
the drill evaluation tool and the written on-site survey or the site survey and the video analysis, to create a multidimensional,
comprehensive evaluation instrument. Future study will be
necessary to determine which items have the greatest internal
reliability and interrater reliability, after which the tool will require
pilot testing and validation.
Among a cohort of 6 hospitals in Los Angeles County, CA,
participating in a regional disaster drill, an on-site survey, the
AHRQ disaster drill evaluation tool, and the MedTeams video
analysis of teamwork behavior demonstrated little consistency,
suggesting that each may measure a different aspect of hospital
preparedness.
We wish to thank the following personnel and entities for
support of this work: Agency for Healthcare Research and
Quality grant 1 F32 HS013985; Emergency Medical
Foundation Research Fellowship Grant; an unrestricted grant
from ARCO Corporation; the hospital disaster coordinators from
each participating hospital, and the members of Amy Kaji’s
epidemiology doctoral dissertation committee for their guidance:
Robert Kim-Farley, MD, MPH; Jorn Olsen, MD, PhD; and
Scott Layne, MD. MedTeams is a registered trademark of
Dynamics Research Corporation, Andover, MA.
Supervising editor: Jonathan L. Burstein, MD
Author contributions: AHK and RJL conceived and designed
the study and obtained research funding. AHK and RJL
supervised the conduct of the data collection. AHK undertook
recruitment of participating centers and managed the data.
AHK and RJL analyzed the data from the on-site survey and
the disaster drill. VL analyzed the data for teamwork behaviors
from the video. AHK drafted the article, and all 3 authors
contributed substantially to its revision. All authors had full
access to the data and take full responsibility for the integrity
of the data and the accuracy of the data analysis. AHK takes
responsibility for the paper as a whole.
Funding and support: By Annals policy, all authors are required
to disclose any and all commercial, financial, and other
relationships in any way related to the subject of this article,
that might create any potential conflict of interest. The authors
have stated that no such relationships exist. See the
Manuscript Submission Agreement in this issue for examples
of specific conflicts covered by this statement.
Publication dates: Received for publication August 11, 2007.
Revision received October 4, 2007. Accepted for publication
October 29, 2007. Available online January 11, 2008.
Reprints not available from the authors.
Address for correspondence: Amy H. Kaji, MD, MPH,
Department of Emergency Medicine, Harbor–UCLA Medical
Center, 1000 West Carson Street, Box 21, Torrance, CA
90509; 310-222-3500, fax 310-782-1763; E-mail
akaji@emedharbor.edu.
REFERENCES
1. Noji E. Disaster epidemiology. Emerg Med Clin North Am. 1996;
14:289-300.
2. Waeckerle J. Disaster planning and response. N Engl J Med.
1991;324:815-821.
3. Higgins W, Wainright C, Lu N, et al. Assessing hospital
preparedness using an instrument based on the Mass Casualty
Disaster Plan checklist: results of a statewide survey. Am J Infect
Control. 2004;32:327-332.
4. Agency for Healthcare Research and Quality. AHRQ unveils
hospital bioterrorism preparedness tool. Available at:
http://www.ahrq.gov/news/press/pr2002/bioterrpr.htm.
Accessed July 19, 2007.
5. Greenberg MO, Jurgens SM, Gracely EJ. Emergency department
preparedness for the evaluation and treatment of victims of biological or
chemical terrorist attack. J Emerg Med. 2002;22:273-278.
6. Treat KN, Williams JM, Furbee PM, et al. Hospital preparedness
for weapons of mass destruction incidents: an initial assessment.
Ann Emerg Med. 2001;38:562-565.
7. Ghilarducci DP, Pirrallo RG, Hegman KT. Hazardous materials
readiness in the United States level 1 trauma centers. J Occup
Environ Med. 2000;42:683-692.
8. Kaji AH, Lewis RJ. Hospital disaster preparedness in Los Angeles
County. Acad Emerg Med. 2006;13:1198-1203.
9. Agency for Healthcare Research and Quality. Evaluation of hospital
disaster drills: a module-based approach. Prepared for the Agency
for Healthcare Research and Quality, contract No. 290-02-0018, and
prepared by the Johns Hopkins University Evidence-based Practice
Center, the Johns Hopkins University Bloomberg School of Public
Health, and the Johns Hopkins University Applied Physics Laboratory.
Available at: http://www.ahrq.gov/research/hospdrills/hospdrill.htm.
Accessed September 26, 2007.
10. Kaji AH, Lewis RJ. Assessment of the Johns Hopkins/AHRQ
hospital disaster drill evaluation tool. Andover, MA: Dynamics
Research Corporation; 2007.
11. Sherwood G, Thomas E, Bennett DS, et al. A teamwork model to
promote patient safety in critical care. Crit Care Nurs Clin North
Am. 2002;14:333-340.
12. Morey JC, Simon R, Jay GD, et al. Error reduction and
performance improvement in the emergency department through
formal teamwork training: evaluation results of the MedTeams
project. Health Serv Res. 2002;37:1553-1581.
13. Braun BI, Darcy L, Divi C, et al. Hospital bioterrorism
preparedness linkages with the community: improvements over
time. Am J Infect Control. 2004;32:317-326.
14. Auf der Heide E. Disaster planning, part II: disaster problems,
issues, and challenges identified in the research literature. Emerg
Med Clin North Am. 1996;14:453-480.
15. Auf der Heide E. The importance of evidence-based disaster
planning. Ann Emerg Med. 2006;47:34-46.
16. Hsu EB, Jenckes MW, Catlett CL, et al. Effectiveness of hospital
staff mass-casualty incident training methods: a systematic
literature review. Prehosp Disaster Med. 2004;19:191-199.
17. Murphy JK. After 9/11: priority focus areas for bioterrorism
preparedness in hospitals. J Healthc Manag. 2004;49:227-235.
18. Quarantelli EL. Delivery of Emergency Medical Care in Disasters:
Assumptions and Realities. New York, NY: Irvington Publishers; 1983.
19. Wetter D, Daniell W, Treser CD. Hospital preparedness for victims
of chemical or biological terrorism. Am J Public Health. 2001;91:
710-716.
20. Cone DC, Weir SD, Bogucki S. Convergent volunteerism. Ann
Emerg Med. 2003;41:457-462.
21. Bissell RA, Becker BM, Burkle FJ Jr. Health care personnel in
disaster response: reversible roles or territorial imperatives?
Emerg Med Clin North Am. 1996;14:267-288.
22. Greenberg MO, Jurgens SM, Gracely EJ. Emergency department
preparedness for the evaluation and treatment of victims of
biological or chemical terrorist attack. J Emerg Med. 2002;22:
273-278.
23. Treat KN, Williams JM, Furbee PM, et al. Hospital preparedness
for weapons of mass destruction incidents: an initial
assessment. Ann Emerg Med. 2001;38:562-565.
24. Levitin HW, Sieglson HJ. Hazardous materials. Disaster medical
planning and response. Emerg Med Clin North Am. 1996;14:327-348.
25. Schultz CH, Mothershead JL, Field M. Bioterrorism preparedness.
I: The emergency department and hospital. Emerg Med Clin North
Am. 2002;20:437-455.
26. Ridge T. The critical role of hospitals involved in national
bioterrorism preparedness. J Healthc Prot Manage. 2002;18:39-48.
27. Landesman LY, Markowitz SB, Rosenberg SN. Hospital
preparedness for chemical accidents: the effect of environmental
legislation on healthcare services. Prehosp Disaster Med. 1994;
9:154-159.
28. Keim ME, Pesik N, Twum-Danso NA. Lack of hospital
preparedness for chemical terrorism in a major US city: 1996-2000. Prehosp Disaster Med. 2003;18:193-199.
29. Barrett J, Bondaruk J. MedTeams Performance Evaluation Course
Guide TDR/BARS. Andover, MA: Dynamics Research Corporation;
2004.
30. Jasper E, Sweeney B, Williams E, et al. Value of an unannounced
drill in preparing hospitals for a terrorism attack or other mass
casualty event [abstract]. Acad Emerg Med. 2004;11:562.
APPENDIX E1.
Hospital disaster preparedness survey with point values.
49. How many negative pressure isolation beds do you have at your hospital?
a. None - 0
b. 1-10 - 1
c. 10-20 - 2
d. >20 - 3
e. Don’t know- .
f. Missing - .
50. How would you rate your hospital laboratory’s ability to identify specimens of bioterrorism? NO
SCORE
a. very poor
b. poor
c. fair
d. good
e. very good
f. don’t know/missing
51. How would you rate your hospital’s ability to manage victims of bioterrorism? NO SCORE
a. very poor
b. poor
c. fair
d. good
e. very good
f. don’t know/missing
52. How would you rate your hospital’s ability to manage victims of disasters, in general? NO
SCORE
a. very poor
b. poor
c. fair
d. good
e. very good
f. don’t know/missing
53. What is your average daily in-patient census? NO SCORE
a. less than 50
b. between 50-200
c. between 200-400
d. between 400-600
e. greater than 600
f. don’t know/missing
54. How many employees does your hospital have? NO SCORE
a. less than 200
b. between 200-500
c. between 500-2000
d. greater than 2000
e. don’t know/missing