MMHA 6700 Walden University Healthcare Operations Management Essay

Description

Discussion: Performance Improvement

Clearly the criteria, discipline, and focus that underlie the Baldrige process have been key contributors to our daily improvements. The feedback we received … has been instrumental in providing a clear road map for the journey.

—Michael Murphy, CEO, 2007 Baldrige Award recipient, Sharp HealthCare

Incremental change is how a culture of continuous improvement creates a pattern of success. In the Discussion for Week 4, you examined an organization’s actions in relation to criteria of high reliability and recommended steps for improvement. This week, you will revisit your recommendations.

For this Discussion:

Read this week’s articles on the standards for high reliability, then review the Week 4 Discussion. Draw comparisons between your organization and others, and examine any feedback provided by your peers.

Post a cohesive response to the following:

With the feedback and this week’s readings in mind, reexamine the steps for improving high reliability that you suggested for your organization. Would you change your recommendations in light of what you have learned in this course? Could your recommended steps be expanded or refined?

Support your response by identifying and explaining key points and/or examples presented in the Learning Resources.

Week 4 Discussion Post:

My healthcare organization is highly reliable, largely because of its scheduling and staffing processes. Management uses computer technologies to manage staffing, ensuring that every healthcare worker carries a pager or cell phone that alerts them when to attend to diverse patient needs. Whenever there is an inflow of patients into the Emergency Room, technicians immediately page the necessary personnel, who respond to the emergency and attend to patients efficiently. Management also uses technologies to monitor operations and personnel performance across all departments. These monitoring technologies have helped healthcare personnel become more attentive to their patients and have minimized medical errors. In addition, the organization uses scheduling to attend to the most critical patients first and to ensure the availability of resources across all areas of care.

The organization is also committed to safety by keeping its facilities clean and well-ventilated at all times. In light of COVID-19, every entrance to the hospital is equipped with running water, hand sanitizer, and gloves to reduce the spread of infection. Additionally, healthcare workers are assigned to check on ICU patients regularly, change their bandages, and clean them to prevent hospital-acquired infections. The organization also has a safety policy under which all personnel understand that patient safety is always the priority.

However, there are areas in the organization that need improvement. For instance, a more clearly defined communication system is needed to allow effective reporting of issues in the hospital. Some hospital personnel do not report errors or patient issues to their supervisors, and as a result, some patient needs are ignored. Newer hospital employees, especially interns, are afraid to report errors or other issues to their attending supervisors for fear of being reprimanded. Failure to report problems in the hospital can lead to poor quality of care for patients (Chassin & Loeb, 2013). Management should also focus on establishing a culture of safety by holding all employees accountable for their actions. The organization has reported many medical errors, such as missed diagnoses and misdiagnoses, over the years. Most of these errors are attributable to a lack of proper patient care by healthcare professionals. Therefore, management should hold every individual accountable for their actions regarding every patient in the hospital (Chassin & Loeb, 2013); staff should explain the reasons for their errors and be trained on how to provide quality patient care.

Reference

Chassin, M. R., & Loeb, J. M. (2013). High-reliability health care: Getting there from here. The Milbank Quarterly, 91(3), 459–490.

Resources:

McLaughlin, D. B., & Olson, J. R. (2017). Health care operations management (3rd ed.). Chicago, IL: Health Administration Press.

  • Chapter 14, “Improving Financial Performance with Operations Management” (pp. 369-387)

https://journals.lww.com/journalacs/pages/default.aspx

https://www.tandfonline.com/doi/full/10.1080/14783363.2014.934519

https://www.nist.gov/baldrige

Unformatted Attachment Preview

Self-Analysis Worksheet
For use with the Criteria for Performance Excellence, Education Criteria for Performance Excellence, Health Care Criteria for Performance Excellence, or Baldrige Excellence Builder.

Insights gained from external examiners or reviewers are always helpful, but you know your organization. You are in an excellent position to identify your organization’s key strengths and key opportunities for improvement (OFIs).

  • Complete your responses, or have a team create responses, to the questions in the seven Baldrige Criteria categories found in the Baldrige Excellence Framework booklet or the Baldrige Excellence Builder.
  • Identify one or two strengths and one or two OFIs for each Criteria category, and record them on this worksheet.
  • For strengths and OFIs of high importance, use the worksheet to create and communicate an action plan for improvement.

For each of the seven Criteria categories (1 Leadership; 2 Strategy; 3 Customers; 4 Measurement, Analysis, and Knowledge Management; 5 Workforce; 6 Operations; 7 Results), the worksheet provides space to record one or two strengths and one or two OFIs, the importance of each (high, medium, or low), and, for high-importance areas, the stretch (strength) or improvement (OFI) goal, the planned action, the target date, and the person responsible.

Hospitals In ‘Magnet’ Program Show Better Patient Outcomes On Mortality Measures Compared To Non-‘Magnet’ Hospitals
By Christopher R. Friese, Rong Xia, Amir Ghaferi, John D. Birkmeyer, and Mousumi Banerjee
Health Affairs, 34(6) (2015): 986–992. doi: 10.1377/hlthaff.2014.0793. Online version: http://content.healthaffairs.org/content/34/6/986.full.html

Christopher R. Friese (cfriese@umich.edu) is an assistant professor in the School of Nursing at the University of Michigan, in Ann Arbor.
Rong Xia is a doctoral student in biostatistics in the School of Public Health at the University of Michigan. Amir Ghaferi is an assistant professor in the Department of Surgery and the Ross School of Business at the University of Michigan. John D. Birkmeyer is executive vice president at Enterprise Support Services and chief academic officer at the Dartmouth-Hitchcock Medical Center, in Hanover, New Hampshire. Mousumi Banerjee is a research professor of biostatistics in the School of Public Health at the University of Michigan.

ABSTRACT

Hospital executives pursue external recognition to improve market share and demonstrate institutional commitment to quality of care. The Magnet Recognition Program of the American Nurses Credentialing Center identifies hospitals that epitomize nursing excellence, but it is not clear that receiving Magnet recognition improves patient outcomes. Using Medicare data on patients hospitalized for coronary artery bypass graft surgery, colectomy, or lower extremity bypass in 1998–2010, we compared rates of risk-adjusted thirty-day mortality and failure to rescue (death after a postoperative complication) between Magnet and non-Magnet hospitals matched on hospital characteristics. Surgical patients treated in Magnet hospitals, compared to those treated in non-Magnet hospitals, were 7.7 percent less likely to die within thirty days and 8.6 percent less likely to die after a postoperative complication. Across the thirteen-year study period, patient outcomes were significantly better in Magnet hospitals than in non-Magnet hospitals. However, outcomes did not improve for hospitals after they received Magnet recognition, which suggests that the Magnet program recognizes existing excellence and does not lead to additional improvements in surgical outcomes.

Health policy makers in the United States have placed increased emphasis on publicly identifying hospitals with superior outcomes.1 Amid increased competition for patients and payers, hospital executives face the daunting tasks of ensuring high-quality care, retaining qualified staff, and marketing their facilities. Patients express increased interest in using quality rankings to select hospitals for surgical care. Thus, hospital executives seek external recognition such as that provided in the rankings of US News and World Report2 and other organizations, including the Leapfrog Group,3 the Baldrige program,4 and Truven Health Analytics.5 There is little overlap in these rankings, which adds to confusion over how consumers can reliably assess hospital quality.6

The Magnet Recognition Program of the American Nurses Credentialing Center is one initiative designed to identify health care facilities with a commitment to quality improvement, especially in terms of nursing care delivery.7 The voluntary program was established in 1994 by a subsidiary of the American Nurses Association.
Hospitals that participate in the program pay a fee for the recognition process, which includes rigorous documentation and site visits to assess adherence to five key principles: transformational leadership, a structure that empowers staff, an established professional nursing practice model, support for knowledge generation and application, and robust quality improvement mechanisms.8 Magnet recognition lasts for four years. Magnet hospitals have improved nursing job outcomes, such as burnout and satisfaction,9 and Magnet recognition has been associated with improved hospital financial performance.10 According to the program’s website, one stated benefit of recognition is to "improve patient care."11 In March 2015, 402 facilities in the United States were recognized by the Magnet program. The number of facilities that apply for recognition but do not receive it is not a matter of public record.

Studies have reported better patient outcomes in Magnet hospitals. Three cross-sectional studies have identified favorable outcomes for Medicare discharges,12 neonates,13 and surgical patients.14 However, it is unclear whether or not the successful pursuit of Magnet recognition leads to improved patient outcomes. Despite the favorable outcomes observed in these cross-sectional studies, lingering questions remain. First, these findings have not been replicated with nationally representative longitudinal data. Second, previous studies12–14 have not determined whether patient outcomes improve after Magnet recognition is obtained. To address these questions, we investigated patient outcomes in Magnet and non-Magnet hospitals over time. In addition, we examined outcomes in Magnet hospitals both before and after they received recognition. The study was deemed exempt from review by the University of Michigan’s Institutional Review Board.

Study Data And Methods

We analyzed Medicare inpatient claims files for the years 1998–2010 to assemble three cohorts of surgical patients: those who had coronary artery bypass graft surgery, colectomy, or lower extremity bypass.15 These operations were selected to reflect variation in baseline mortality risk and because they are performed frequently in acute care hospitals. We excluded patients who died on the admission date. The complete sample consisted of 5,057,255 patients who were ages sixty-five and older, enrolled in fee-for-service Medicare, and treated in one of 5,222 hospitals during the thirteen-year study period. After matching each Magnet hospital with two non-Magnet hospitals from the Medicare sample, we restricted our analyses to 1,897,014 patients who were treated in one of 993 hospitals. Details about the matching are provided in the online Appendix.16

Thirty-Day Mortality And Failure To Rescue

Thirty-day mortality was defined as all-cause mortality within thirty days of the hospital admission date. Failure to rescue is a death within thirty days of hospital admission for patients who also experienced a postoperative complication.17,18 Multiple definitions of failure to rescue are available.
To be consistent with our previous work,18 we used International Classification of Diseases, Ninth Revision (ICD-9), and Current Procedural Terminology codes to identify the presence of nine postoperative complications (pulmonary failure, pneumonia, myocardial infarction, venous thromboembolism, acute renal failure, hemorrhage, surgical site infection, gastrointestinal bleed, and reoperation). Patients with these codes up to ninety days before the admission were excluded from consideration. Failure to rescue is an attractive quality-of-care measure because it focuses less on the occurrence of a complication and more on the hospital’s capability to recognize and address a complication. Our team’s previous work suggests that, compared to complication rates, failure-to-rescue rates are more closely associated with differences in hospital characteristics.18 Failure to rescue is also considered a sensitive measure of nursing care delivery.19

Magnet Hospital Recognition

The primary exposure variable was a hospital’s recognition by the ANCC Magnet Recognition Program.7 The program’s website was used to identify hospitals that obtained Magnet recognition and the year or years that the recognition was received. We identified each hospital’s Medicare National Provider Identifier using the National Plan and Provider Enumeration System.20 Hospitals were classified as having Magnet recognition in the corresponding year of analysis, having Magnet recognition at any time during the study period, or not having Magnet recognition at any time during the study period. Magnet classifications were updated each year of the study period in the case of mergers or closures.

Hospital Characteristics

Hospital characteristics were obtained from the Medicare Provider of Service files and the Healthcare Cost Report Information System. We used the following seven hospital characteristics to account for differences between Magnet and non-Magnet hospitals: geographic location (urban or rural, determined by whether or not the hospital was in a Metropolitan Statistical Area), presence or absence of a transplant program, teaching status (determined by whether or not the hospital employed medical residents or fellows), size (the number of staffed beds), outpatient share of revenue (calculated as the amount of revenue obtained from outpatient billing, divided by the revenue obtained from the sum of outpatient and inpatient billing), cost-to-charge ratio (calculated as the total costs for inpatient acute care reported to Medicare, divided by the charges submitted in the same fiscal year), and nurse staffing (registered nurse hours per patient day, adjusted for each hospital’s outpatient share).21 For the last characteristic, the adjustment reduced the variation in reported staffing levels for facilities with large on-site ambulatory practices that skewed inpatient nurse staffing values.

Patient Characteristics For Risk Adjustment

Diagnosis and procedure codes and demographic variables were used for severity-of-illness adjustment. Using methods published by Anne Elixhauser and coauthors,22 we identified comorbid conditions from diagnosis and procedure codes. Age, sex, race/ethnicity, operation performed, number of comorbid conditions reported, and the presence of twenty-nine specific comorbid conditions were included in all models.
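The two outcome measures defined above reduce to simple proportions over different denominators. As a quick reference, here is a minimal formalization of those definitions; the notation is illustrative and not taken from the article:

```latex
% Illustrative restatement of the outcome definitions from the methods
% section; the notation is not the authors'.
\[
\text{30-day mortality rate}
  = \frac{\#\{\text{patients who die of any cause within 30 days of admission}\}}
         {\#\{\text{all surgical patients in the cohort}\}}
\]
\[
\text{failure-to-rescue rate}
  = \frac{\#\{\text{deaths within 30 days among patients with a postoperative complication}\}}
         {\#\{\text{patients with at least one of the nine postoperative complications}\}}
\]
```

Read against the unadjusted figures reported in the results below, these work out to 6.1 percent of the 1,897,014 patients for thirty-day mortality and 12.0 percent of the 669,158 patients who experienced a complication for failure to rescue.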
Statistical Analysis

We linked the list of Magnet hospitals to the patient claims and hospital characteristics data by matching Medicare provider identifier and year. All patient data were included if the hospital had relevant operations in the corresponding study year. We first conducted analyses at the patient level to understand the outcomes from the population perspective. We then conducted hospital-level analyses to examine these issues from the perspective of hospital and health system leaders.

Patient-Level Analyses

For thirty-day mortality, we analyzed the sample of 1,897,014 patients. For failure to rescue, we studied a subsample of 669,158 patients who experienced a postoperative complication; these patients were treated in 984 hospitals. We used generalized estimating equations to examine the likelihood of both thirty-day mortality and failure to rescue for hospitals that had ever received Magnet recognition and those that never received it. All models were adjusted for patient characteristics, hospital characteristics, and year of operation. To reduce differences in hospital characteristics between Magnet and non-Magnet hospitals, we used a propensity score model that compared each Magnet hospital’s outcomes with those of two non-Magnet hospitals that were most closely matched on the seven hospital characteristics listed above. Details on the modeling strategies and adjustments are available in the Appendix.16

Hospital-Level Analyses

We analyzed patient outcomes across hospitals to assess hospital outcomes over time and to determine whether outcomes improved after hospitals received Magnet recognition. For each study year, risk-adjusted patient outcomes were averaged for each hospital. We plotted the risk-adjusted outcome rates for Magnet and non-Magnet hospitals across all thirteen study years. First, we examined all Magnet hospitals and their matched controls across all study years. For each study year, we identified hospitals that achieved Magnet recognition in that year and their matched controls. We compared outcomes for the years before, during, and after Magnet recognition (and the comparable observation years for non-Magnet control hospitals). Finally, we examined outcomes for patients treated in all Magnet hospitals to determine whether outcomes were better if the hospital was recognized during the year of the operation than if the hospital was not recognized at that time.

Sensitivity Analyses

To increase confidence in our presented findings, we conducted seven sets of sensitivity analyses. First, we replicated our model using Jeffrey Silber and coauthors’ definition of failure to rescue.17 Second, in contrast to the approach we present here, in which Magnet hospitals were matched to non-Magnet hospitals, we replicated our findings by using all hospitals in the national Medicare sample. Third, we replicated our findings by using all Magnet hospitals and five non-Magnet matched control hospitals for each Magnet hospital instead of two. Fourth, we replicated our patient-level models using a mixed modeling approach.23 Fifth, we ran the analyses separately for the three operations. Sixth, instead of treating the hospital as a fixed effect in the hospital-level analyses, we treated it as a random effect in mixed models.
Seventh, we examined four additional operations (carotid endarterectomy, aortic valve repair, abdominal aortic aneurysm repair, and mitral valve repair). None of the estimates obtained in the sensitivity analyses differed appreciably from those reported here.

Limitations

This study has several limitations. Between 1998 and 2010, the Magnet recognition program has undergone criteria changes. We could not account for the various changes and external factors that might influence receipt of Magnet recognition. Despite careful risk adjustment, there were unmeasured differences in hospitals (such as staff perceptions of their work environment and intensive care unit availability) and characteristics of patients (for example, physiological variables) that might influence postoperative outcomes. In addition, results from Medicare claims might not be generalizable to other populations. The propensity score model matched each Magnet hospital with two similar hospitals. However, there were still differences in selected hospital characteristics, which suggests imbalances in our matching process. This is a common problem in health services research.24 Using additional hospital characteristic measures in the propensity score model might have improved our approach. Furthermore, our models estimated the effects from initial or current Magnet recognition: 48 percent of the Magnet hospitals in the sample had a gap in their recognition. Finally, because of hospital closures and mergers, not all facilities were included in all thirteen study years.

Study Results

Of the 1,897,014 patients, 839,802 (44.3 percent) were treated in Magnet hospitals (Exhibit 1). The two groups were similar in terms of age, number of comorbid conditions, and sex distribution. The unadjusted overall thirty-day mortality rate was 6.1 percent. In the subset of patients who experienced a postoperative complication (669,158; 35.3 percent of the sample), the unadjusted failure-to-rescue rate was 12.0 percent. Of the 993 hospitals studied, 331 (33.3 percent of the analytic sample) were recognized as Magnet hospitals at any time during the study period. Magnet hospitals were larger than non-Magnet hospitals (median staffed beds: 421 versus 371 beds), and a larger share of Magnet hospitals had transplant programs (29.3 percent versus 18.7 percent; Exhibit 1). Compared to non-Magnet hospitals, Magnet facilities had better nurse staffing (in terms of adjusted registered nurse hours per patient day) and were less likely to be in an urban location and to have a teaching program. However, these differences were not significant.

Magnet Recognition And Patient Mortality

Thirty-day mortality rates were significantly lower in Magnet hospitals than in matched controls (5.8 percent versus 6.3 percent; Exhibit 2). A similar difference was observed for failure to rescue. In multivariable analyses, after controlling for patient and hospital characteristics, we found that patients treated in Magnet hospitals were 7.7 percent less likely to experience thirty-day mortality than patients treated in non-Magnet hospitals (95% confidence interval: 0.89, 0.96). And patients treated in Magnet hospitals were 8.6 percent less likely to die after a postoperative complication than patients treated in matched control hospitals (95% CI: 0.88, 0.95).
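Note that the "7.7 percent less likely" and "8.6 percent less likely" figures are relative risk reductions, while the accompanying confidence intervals are reported on the ratio scale. Assuming that reading, the implied adjusted risk ratios are roughly:

```latex
% Implied adjusted risk ratios, assuming the quoted percentages are
% relative risk reductions (1 - RR); the confidence intervals are as
% reported in the article.
\[
RR_{\text{30-day mortality}} \approx 1 - 0.077 = 0.92
  \quad (95\%\ \text{CI: } 0.89,\ 0.96)
\]
\[
RR_{\text{failure to rescue}} \approx 1 - 0.086 = 0.91
  \quad (95\%\ \text{CI: } 0.88,\ 0.95)
\]
```

Both point estimates sit inside the reported intervals, which is consistent with that interpretation.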
Hospital Outcomes Over Time

With the exception of two years, Magnet hospitals had lower thirty-day mortality and failure-to-rescue rates than matched control hospitals (Exhibit 3). The rates of thirty-day mortality and failure to rescue for Magnet hospitals were higher than those for matched controls in 2009 and 2008, …

Exhibit 1. Characteristics Of Patients And Hospitals In The Study, By Magnet Recognition Status

Characteristic                                   Magnet hospitals (n = 331)   Non-Magnet matched control hospitals (n = 662)
Patients
  All                                            44.3%                        55.7%
  Age (years)****
    65–69                                        24.2                         24.7
    70–74                                        26.2                         26.2
    75–79                                        24.8                         24.5
    80–84                                        16.0                         15.8
    85 and older                                 8.8                          8.8
  Number of comorbid conditions****
    0                                            11.1                         10.8
    1                                            28.3                         27.7
    2 or more                                    60.6                         61.5
  Sex****
    Male                                         58.1                         56.8
    Female                                       41.9                         43.2
  Operation****
    Coronary artery bypass graft surgery         55.0                         53.7
    Colectomy                                    29.2                         30.1
    Lower extremity bypass                       15.8                         16.2
Hospitals
  Urban location*
    No                                           8.2%                         5.0%
    Yes                                          91.8                         95.0
  Teaching program
    No                                           31.4                         29.0
    Yes                                          68.6                         71.0
  Transplant program****
    No                                           70.7                         81.3
    Yes                                          29.3                         18.7
  Staffed beds (median)**                        421                          371
  Adjusted RN hours/patient day^a (median)       7.10                         6.74
  Cost-to-charge ratio (median)**                0.34                         0.33
  Outpatient share (median)                      0.36                         0.35

SOURCE Authors’ analysis of 1998–2010 Medicare data. NOTES We used chi-square tests to compare categorical variables and the Wilcoxon two-sample test to compare continuous variables. Asterisks displayed with category labels show the results. Outpatient share and cost-to-charge ratio are defined in the text. RN is registered nurse. ^a Adjusted for hospital’s outpatient share. *p
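For readers who want to see how the matching step in the methods section works mechanically, the following is a minimal, illustrative Python sketch of 1:2 propensity-score matching of Magnet to non-Magnet hospitals on the seven characteristics listed in the article. It is not the authors' code: the data are synthetic, the column names are hypothetical, and it uses simple nearest-neighbor matching with replacement, whereas the study's actual matching procedure is described in its online appendix.

```python
# Illustrative sketch of 1:2 propensity-score matching of Magnet hospitals
# to non-Magnet controls. Synthetic data and hypothetical column names;
# not the authors' code or their exact matching procedure.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical hospital-level table: one row per hospital, with the seven
# matching characteristics named in the article plus a Magnet indicator.
rng = np.random.default_rng(0)
n = 500
hospitals = pd.DataFrame({
    "magnet": rng.integers(0, 2, n),              # 1 = ever Magnet-recognized
    "urban": rng.integers(0, 2, n),
    "teaching": rng.integers(0, 2, n),
    "transplant_program": rng.integers(0, 2, n),
    "staffed_beds": rng.normal(400, 100, n),
    "outpatient_share": rng.uniform(0.2, 0.5, n),
    "cost_to_charge_ratio": rng.uniform(0.2, 0.5, n),
    "rn_hours_per_patient_day": rng.normal(7.0, 1.0, n),
})

X = StandardScaler().fit_transform(hospitals.drop(columns="magnet"))
y = hospitals["magnet"].to_numpy()

# Step 1: propensity score = modeled probability that a hospital is a
# Magnet hospital, given the seven characteristics.
propensity = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# Step 2: for each Magnet hospital, pick the two non-Magnet hospitals with
# the closest propensity scores (nearest-neighbor matching with replacement,
# a simplification of the study's procedure).
magnet_idx = np.flatnonzero(y == 1)
control_idx = np.flatnonzero(y == 0)
nn = NearestNeighbors(n_neighbors=2).fit(propensity[control_idx].reshape(-1, 1))
_, neighbor_pos = nn.kneighbors(propensity[magnet_idx].reshape(-1, 1))
matched_controls = control_idx[neighbor_pos]      # shape: (n_magnet, 2)

print(f"{magnet_idx.size} Magnet hospitals matched to "
      f"{matched_controls.size} control assignments")
```

In the study itself, the outcome models (generalized estimating equations adjusted for patient characteristics, hospital characteristics, and year) would then be fit on the patients treated in the matched hospitals; that step is omitted here.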

Explanation & Answer

View attached explanation and answer. Let me know if you have any questions.


MMHA 6700: Healthcare Operations Management

Name
Affiliation
Date

Introduction

High reliability generally means having appropriate strategies and services that can be sustained over long periods. This includes limiting or eliminating the kinds of major quality failures that should not occur in healthcare facilities. It is also important to understand that leadership commitment is a core element of high reliability. Hence, this study focuses on reexamining the steps for improvement of high r...
