Lake Sumter State College Mixed Methods Research Responses


Chapter 12: Understanding Mixed Methods Research, Quality Improvement, and Other Special Types of Research

Mixed Methods Research
❖ Research that integrates quantitative and qualitative data and strategies in a single study or in coordinated clusters of studies
❖ Many areas of inquiry can be enriched by triangulating quantitative and qualitative data; some questions require mixed methods (pragmatism).
❖ Advantages
o Complementarity
o Practicality
o Enhanced validity

Copyright © 2022 Wolters Kluwer. All Rights Reserved.

Purposes and Applications of Mixed Methods Research
❖ Instrument development
❖ Intervention development
❖ Hypothesis generation and testing
❖ Explication

Mixed Methods Designs and Strategies
❖ Concurrent versus sequential approaches
o Concurrent: qualitative and quantitative data are collected at the same time.
o Sequential: qualitative and quantitative data are collected in phases.
❖ Morse notation system
o QUAL/quan, QUAN/qual, QUAL/QUAN

Specific Mixed Methods Designs
❖ Convergent parallel design: obtains different but complementary data about the central phenomenon under study, i.e., triangulates data sources
❖ Explanatory design: a sequential design in which quantitative data are collected in the first phase, followed by qualitative data collected in the second phase
❖ Exploratory design: a sequential MM design in which qualitative data are collected first

Quality Improvement #1
❖ QI involves assessing a problem in patient care, with the aim of improving clinical care and patient outcomes within a health care organization and contributing to improvement science.
Quality Improvement #2
❖ Interventions include the following:
o Provider education (teaching health care teams how best to manage situations)
o Provider reminders (providing decision support materials to prompt health care professionals to undertake some action)
o Patient education (increasing patients' understanding of a prevention or treatment strategy)
o Patient reminders (reminding patients to keep appointments or adhere to regimens)
o Structural changes (creating care coordination or case management systems)

Quality Improvement Planning Tools
❖ Root cause analysis (RCA)
o "5 Whys"

Quality Improvement Approaches
❖ Plan-Do-Study-Act (PDSA)
o Plan: develop strategies or interventions.
o Do: implement the interventions and collect data.
o Study/Check: analyze the collected data.
o Act: disseminate outcome results as appropriate for practice changes.

Question #1
Which type of research involves an intervention?
a. Survey research
b. Clinical trials
c. Secondary analyses
d. Methodological research

Answer to Question #1
b. Clinical trials
Rationale: Studies that involve an intervention include clinical trials, evaluation research, and nursing intervention research. Outcomes research, surveys, secondary analyses, and methodological research do not involve an intervention.

Clinical Trials
❖ Studies that develop clinical interventions and test their efficacy and effectiveness
❖ Trials undertaken to evaluate an innovative therapy or drug are often designed in a series of phases.
Phases of a Full Clinical Trial
❖ Phase I: designed to establish safety, tolerance, and dose
❖ Phase II: seeks preliminary evidence of effectiveness; a pilot test, often using a quasi-experimental design
❖ Phase III: fully tests the efficacy of the treatment via a randomized clinical trial (RCT), often in multiple sites; sometimes called an efficacy study
❖ Phase IV: focuses on the external validity (effectiveness) of an intervention in the general population; emphasis on generalizability

Practical Clinical Trials
❖ The emphasis on EBP has led to a call for studies that bridge the gap between tightly controlled efficacy studies and subsequent effectiveness studies.
❖ Practical clinical trials (or pragmatic clinical trials) help in making decisions in real-world applications.
❖ Pragmatism, a paradigm often associated with MM research, provides a basis for a position that has been called the "dictatorship of the research question."

Evaluation Research
❖ Examines how well a specific program, practice, procedure, or policy is working
o Process analysis: often undertaken to obtain descriptive information about the process by which a program gets implemented and how it actually functions
o Economic analysis: assesses whether a program's benefits outweigh its monetary costs

Question #2
During which phase of a full clinical trial would an efficacy study be done?
a. Phase I
b. Phase II
c. Phase III
d. Phase IV

Answer to Question #2
c. Phase III
Rationale: Phase III fully tests the efficacy of the treatment via a randomized clinical trial (RCT), often in multiple sites; this phase is sometimes called an efficacy study.
Phase I finalizes the intervention; Phase II seeks preliminary evidence of effectiveness, usually via a pilot test; and Phase IV focuses on the long-term consequences of the intervention and on generalizability (sometimes called an effectiveness study).

Nursing Intervention Research
❖ Describes an approach distinguished by a distinctive process of planning, developing, and testing interventions, especially complex interventions
❖ Several phases:
o Basic developmental research
o Pilot research
o Evaluation of intervention efficacy
o Implementation research

Comparative Effectiveness Research (CER)
❖ Involves direct comparisons of two or more health interventions
❖ Seeks insights into which intervention works best for which patients
❖ Sometimes referred to as patient-centered outcomes research

Health Services Research
❖ Designed to document the quality and effectiveness of health care and nursing services
❖ Often focuses on parts of a health care quality model developed by Donabedian; key concepts:
o Structure of care (e.g., nursing skill mix)
o Processes (e.g., clinical decision making)
o Outcomes (end results of patient care)

Outcomes Research
❖ A subset of health services research
❖ Comprises efforts to understand the end results of particular health care practices and to assess the effectiveness of health care services
❖ Represents a response to the increasing demand from policy makers and the public to justify care practices in terms of improved patient outcomes and costs
Survey Research #1
❖ Obtains quantitative information (via self-reports) on the prevalence, distribution, and interrelations of variables in a population
❖ Used primarily in correlational studies and to gather information from nonclinical populations
❖ Secures information about people's actions, intentions, knowledge, characteristics, opinions, and attitudes
❖ May be cross-sectional or longitudinal

Survey Research #2
❖ Modes of collecting survey data:
o Personal (face-to-face) interviews
o Telephone interviews
o Self-administered questionnaires, distributed by mail or the Internet
❖ Personal interviews tend to yield the highest quality data but are very expensive.

Question #3
Tell whether the following statement is True or False. Telephone interviews provide the best quality data for survey research.
a. True
b. False

Answer to Question #3
b. False
Rationale: Personal interviews used with survey research tend to provide the highest quality data, but they are very expensive.

Other Special Types of Research
❖ Intervention research
o Clinical trials; evaluation research; nursing intervention research
❖ Health services and outcomes research
❖ Survey research
❖ Quality improvement studies
❖ Secondary analysis
❖ Delphi surveys
❖ Methodological research

Secondary Analysis
❖ A study that uses previously gathered data to address new questions
❖ Can be undertaken with qualitative or quantitative data
❖ Cost-effective, because data collection is expensive and time-consuming
❖ The secondary analyst may not be aware of data quality problems and typically faces "if only" issues (e.g., if only there were a measure of X in the dataset).
Delphi Surveys
❖ Developed as a tool for short-term forecasting
❖ The technique involves a panel of experts who are asked to complete several rounds of questionnaires focusing on their judgments about a topic of interest.
❖ Multiple iterations are used to achieve consensus.

Methodological Research
❖ Studies that focus on the development, validation, and evaluation of research tools and instruments
❖ Can involve qualitative or quantitative data
❖ Examples:
o Developing and testing a new data collection instrument
o Testing the effectiveness of stipends in facilitating recruitment

Critical Appraisals

Chapter 17: Learning From Systematic Reviews

Research Integration and Synthesis #1
❖ The systematic and rigorous integration and synthesis of evidence is a cornerstone of evidence-based practice (EBP).
❖ It is impossible to develop "best practice" guidelines, protocols, and procedures without organizing and evaluating research evidence through a systematic review.

Research Integration and Synthesis #2
❖ Forms of systematic reviews:
o Narrative, qualitative integration (traditional review of quantitative or qualitative results)
o Meta-analysis (statistical integration of results, used to compute a common effect size)
o Metasynthesis (theoretical integration and interpretation of qualitative findings)

Meta-Analysis: Advantages
❖ Objectivity: statistical integration eliminates bias in drawing conclusions when results in different studies are at odds.
❖ Increased power: reduces the risk of a Type II error compared with a single study
❖ Despite these advantages, meta-analysis is not always appropriate. Indiscriminate use has led critics to warn against potential abuses.
Criteria for Using Meta-Analysis in a Systematic Review
❖ Must decide whether statistical integration is suitable
❖ The research question or hypothesis should be essentially identical across studies.
o Avoid the "fruit" problem: don't combine apples and oranges!
❖ Must have a sufficient knowledge base; there must be enough studies of acceptable quality.
❖ Results can be varied but not totally at odds.

Question #1
Tell whether the following statement is True or False. An advantage of meta-analysis is the objectivity of the conclusions arrived at.
a. True
b. False

Answer to Question #1
a. True
Rationale: Meta-analysis offers a simple advantage as an integration method: objectivity. Readers of a meta-analysis can be confident that another analyst using the same data set and analytic decisions would reach the same conclusions.

Steps in a Meta-Analysis #1
❖ Problem formulation: Delineate the research question or hypothesis to be tested.
❖ Design of the meta-analysis: Identify sampling criteria for the studies to be included.
❖ Search for evidence in the literature: Develop and implement a search strategy.
❖ Evaluation of study quality: Locate and screen the sample of studies meeting the criteria.
❖ Extraction and encoding of data for analysis: Create a dataset.

Steps in a Meta-Analysis #2
❖ Calculation of effects: Calculate an effect size (ES) index.
❖ Data analysis: Determine the weighted average.
❖ Assessment of degree of confidence: GRADE

Search Strategy
❖ Identify electronic databases to use.
❖ Identify additional search strategies (e.g., the ancestry approach).
❖ Decide whether or not to pursue the grey literature (unpublished reports).
❖ Avoid publication bias.
❖ Identify keywords for the search; think creatively and broadly.
Approaches to Evaluating Study Quality #1
❖ Meta-analysts make decisions on handling study quality.
❖ Approaches:
o Omit low-quality studies (e.g., in intervention studies, non-RCTs).
o Give more weight to high-quality studies.
o Analyze low- and high-quality studies to see if effects differ (sensitivity analyses).

Approaches to Evaluating Study Quality #2
❖ Evaluations of study quality can use:
o A scale approach (e.g., using a formal instrument to "score" overall quality)
o A component approach (coding whether certain methodological features were present or not, e.g., randomization, blinding, low attrition)

Question #2
After identifying the research question to be tested for a meta-analysis, what task would the researcher complete next?
a. Develop a search strategy.
b. Locate a sample of studies.
c. Identify sampling criteria.
d. Extract data from reports.

Answer to Question #2
c. Identify sampling criteria.
Rationale: Once the research question has been delineated, the next step is to identify sampling criteria for the studies to be included. Then, the researcher develops and implements a search strategy, locating and screening the sample of studies that meet the criteria. Next, the researcher appraises the quality of the study evidence and extracts and records data from the reports.

Analytic Decisions in Meta-Analysis
❖ What effect size index will be used?
❖ How will heterogeneity be assessed?
❖ Which analytic model will be used?
❖ Will there be subgroup (moderator) analyses?
❖ How will quality be addressed?
❖ Will publication bias be assessed?

Heterogeneity #1
❖ Results (effects) inevitably vary from one study to the next.
❖ Major question: Is heterogeneity just random fluctuations?
o If "yes," a fixed effects model of analysis can be used.
o If "no," a random effects model should be used.
❖ Heterogeneity can be formally tested but can also be assessed visually via a forest plot.

Heterogeneity #2
❖ Factors influencing variation in effects are usually explored via subgroup analysis (moderator analysis).
❖ Do variations relate to:
o Participant characteristics (e.g., men vs. women)?
o Methods (e.g., RCTs vs. quasi-experiments)?
o Intervention characteristics (e.g., a 3-week vs. a 6-week intervention)?

Question #3
Tell whether the following statement is True or False. A key component of meta-analysis is the calculation of an effect size index.
a. True
b. False

Answer to Question #3
a. True
Rationale: An effect size index is a central feature of meta-analysis. It is computed for each study and then combined and averaged.

Metasynthesis
❖ Terminology relating to qualitative integration is diverse and complex.
❖ Metasynthesis is an umbrella term, broadly representing "a family of methodological approaches to developing new knowledge based on rigorous analysis of existing qualitative research findings."
❖ Metasynthesis is not a literature review.
❖ Integrations that are more than the sum of the parts: novel interpretations of integrated findings

Metasynthesis: Steps
❖ Similar to meta-analysis in many ways:
o Formulate the problem.
o Decide on the design: selection criteria, search strategy.
o Search for data in the literature.
o Evaluate study quality.
o Extract data for analysis.
o Perform data analysis and interpretation.
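The weighted-average, fixed-effects, and heterogeneity ideas in the preceding slides can be illustrated numerically. The following is an illustrative Python sketch, not taken from the textbook; the inverse-variance weighting and the Q and I² heterogeneity statistics are standard meta-analytic formulas, and the five studies are hypothetical.

```python
# Illustrative sketch of a fixed effects meta-analysis (not from the textbook).
# Each study contributes an effect size (es) and its variance (var); studies
# are weighted by inverse variance, so more precise studies count more.

def fixed_effect_meta(studies):
    """studies: list of (effect_size, variance) tuples; returns
    (pooled effect, Cochran's Q, I-squared percentage)."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled effect
    q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))
    df = len(studies) - 1
    # I^2: percentage of variation attributable to heterogeneity (floored at 0)
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i_squared

# Hypothetical effect sizes (d) and variances from five studies
studies = [(0.45, 0.04), (0.60, 0.02), (0.30, 0.05), (0.52, 0.03), (0.41, 0.06)]
pooled, q, i2 = fixed_effect_meta(studies)
print(f"Pooled d = {pooled:.2f}, Q = {q:.2f}, I^2 = {i2:.1f}%")
```

If I² were large (i.e., heterogeneity is more than random fluctuation), a random effects model would be preferred, as the slide notes.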
Metasynthesis Approaches #1
❖ Noblit and Hare (who developed an approach for meta-ethnography)
o Suggest a seven-phase approach
o Involves "translating" findings from qualitative studies into one another
o "An adequate translation maintains the central metaphors and/or concepts of each account."
o The final step is synthesizing the translations.

Metasynthesis Approaches #2
❖ Sandelowski and Barroso's approach distinguishes studies that are summaries (no conceptual reframing) from syntheses (studies involving interpretation and metaphorical reframing).
❖ Both summaries and syntheses can be used in a meta-summary, which can lay a foundation for a metasynthesis.

Question #4
Tell whether the following statement is True or False. According to Sandelowski and Barroso, a meta-summary lays the foundation for meta-analysis.
a. True
b. False

Answer to Question #4
b. False
Rationale: A meta-summary, as described by Sandelowski and Barroso, lays the foundation for a metasynthesis.

Meta-Summaries
❖ Involve making an inventory of findings; can be aided by computing manifest effect sizes (effect sizes calculated from the manifest content in the studies in the review)
❖ Two types:
o Frequency effect size
o Intensity effect size

Effect Sizes in Meta-Summaries
❖ Frequency effect size
o Count the total number of findings (specific themes or categories) across all studies in the review.
o Compute the prevalence of each theme across all reports (e.g., the #1 theme was present in 75% of the reports).
❖ Intensity effect size
o For each report, compute how many of the total themes are included (e.g., report 1 had 60% of all themes identified).
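The frequency and intensity effect sizes described above are simple percentages, which a short sketch can make concrete. The reports and themes below are hypothetical, invented purely for illustration:

```python
# Illustrative sketch of manifest effect sizes in a meta-summary
# (hypothetical reports and themes, not from the textbook).

reports = {
    "report_1": {"stigma", "isolation", "coping"},
    "report_2": {"stigma", "coping"},
    "report_3": {"isolation", "coping", "hope"},
    "report_4": {"stigma", "coping", "hope"},
}
all_themes = set().union(*reports.values())

# Frequency effect size: percentage of reports in which each theme appears
frequency = {t: 100 * sum(t in themes for themes in reports.values()) / len(reports)
             for t in sorted(all_themes)}

# Intensity effect size: percentage of all themes that each report contains
intensity = {r: 100 * len(themes) / len(all_themes) for r, themes in reports.items()}

print(frequency)  # e.g., "coping" appears in 100% of the reports
print(intensity)  # e.g., report_1 covers 75% of all identified themes
```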
Sandelowski and Barroso's Metasynthesis
❖ Can build on a meta-summary
❖ But can be done only with studies that are syntheses (not summaries), because the purpose is to offer novel interpretations of interpretive findings, not just summaries of findings

Supplement for Chapter 12: Other Specific Types of Research

This supplement offers expanded descriptions of secondary analyses, Delphi surveys, and methodological research. Although these types most often involve quantitative methods, each can involve qualitative methods or mixed (quantitative and qualitative) methods.

SECONDARY ANALYSIS

When researchers complete a study, their collective data comprise a data set. Secondary analysis involves using an existing data set from a previous study or another source to test new hypotheses or to answer new questions.

Secondary Analysis in Quantitative Research

In many quantitative studies, researchers collect far more data than are actually analyzed, thus providing opportunities for further exploration. Secondary analysis of existing data is efficient because data collection is often the most time-consuming and expensive part of a study.

TIP: Research that relies on existing records (e.g., hospital records) is not usually referred to as a secondary analysis because the records' data were not gathered for research purposes.

In some cases, a secondary analysis involves examining previously unanalyzed relationships among variables in the data set. For example, an independent variable in the original study could be the outcome variable in a secondary analysis, or vice versa. In large national surveys, description is often a key goal (e.g., describing the prevalence of certain behaviors or characteristics), but secondary analysts can profitably examine numerous interrelationships among the variables in such data sets.
Secondary analyses may also focus on a subgroup of the full sample (e.g., focusing only on the smokers in a national data set of the general public). Nurse researchers have undertaken secondary analyses both with large national data sets comprising thousands of study participants and with smaller, localized data sets, including data from clinical trials. Many graduate students and junior faculty take advantage of existing data sets from mentors or colleagues. In such cases, the secondary analysis is sometimes called a substudy of a larger parent study.

The use of available data from large studies makes it possible to bypass time-consuming and costly steps in the research process. Moreover, if a data set with a large sample is used, there are advantages in terms of statistical power and statistical conclusion validity. Another advantage is that large data sets often involve samples that are geographically, ethnically, and economically diverse, thus enhancing external validity. An argument can also be made that secondary analysis is an ethically strong approach because it enhances the value of the data from people who volunteered their time.

Nevertheless, there are some noteworthy disadvantages in working with existing data. In particular, the chances are fairly high that the data set will be deficient in some way, such as in the variables measured, the specific methods used to measure key variables, or the composition of the sample (e.g., undesirable exclusions). Secondary analysts may face many "if only" problems: if only the original researchers had asked certain questions, or if only they had measured a particular variable differently. Another concern is whether the data collected in the original study have become "dated." In our rapidly changing world, it may not be reasonable to assume that people's behavior, attitudes, and circumstances have not changed.
Thus, secondary analysts may struggle to justify the use of an existing data set to examine the questions in which they are interested.

Example of a secondary analysis from a clinical trial: Kolanowski and colleagues (2020) conducted a secondary analysis of baseline data from a randomized trial. The trial is testing a strategy in nursing homes for improving staff uptake of behavioral approaches to respond to behavioral symptoms among residents with dementia. In the secondary analysis, the researchers examined the residents' baseline affect balance (the ratio of positive to negative affect) in relation to such staff-related variables as staff interaction during caretaking, staff knowledge of person-centered approaches for dementia care, staff hours of care, and the physical environment. The researchers used data from 325 nursing home residents living in 35 nursing homes.

Secondary Analysis in Qualitative Research

Qualitative researchers have also come to recognize the value of secondary analysis of narrative data sets. In many cases, a secondary analysis of a qualitative data set is undertaken by the same researchers who conducted the original study. Thorne (2013), who identified several types of qualitative secondary analysis, warned of potential problems in qualitative secondary analysis. One of the main issues concerns ethical complications that need to be worked out. Most important is the issue of informed consent: Did the original informed consent include consent for reuse of the data? Another ethical issue concerns confidentiality. A researcher doing the secondary data analysis may unintentionally violate a shared understanding of the boundaries of confidentiality that was clearly understood by the primary researcher. The suitability of the available data set for new research questions is yet another issue that needs to be considered. If a researcher has permission to use another researcher's primary data set, how much of it will they have access to?
For example, if they have access to transcribed interviews, will they also be able to listen to the original audio recordings? Thus, while a secondary analysis of a qualitative data set is appealing because it is efficient, several important issues must be resolved.

Example of a secondary analysis of a qualitative data set: Beck (2020), the second author of this book, conducted a secondary analysis of three qualitative data sets that focused on traumatic stress in maternal–newborn nurses. In the original studies, all of which had been conducted by Beck and colleagues, the data had been analyzed for important themes. In the secondary analysis, the four categories of posttraumatic stress disorder (PTSD) symptoms were used to analyze the data using content analysis. The four symptom categories were intrusion, avoidance, arousal, and negative alterations in cognitions and mood. For the nurses in all three studies, the intrusion category was ranked first.

DELPHI SURVEYS

Delphi surveys were developed as a tool for short-term forecasting. The technique involves a panel of experts who are asked to complete several rounds of questionnaires focusing on their judgments about a topic of interest. Multiple iterations are used to achieve consensus, without requiring face-to-face discussion. Responses to each round of questionnaires are analyzed, summarized, and returned to the experts with a new questionnaire. The experts can then reformulate their opinions with the panel's viewpoint in mind. The process is usually repeated at least three times, until a consensus is obtained.

A key issue in a Delphi study is identifying an appropriate sample of experts on the topic of interest, and soliciting their cooperation and ongoing commitment over multiple rounds of data collection. The selection of experts is done purposively, often with the goal of obtaining a group whose viewpoints are well informed yet cover a range of perspectives.
Participants are almost always geographically (and sometimes culturally) diverse. The number of participants is typically between 15 and 20, but larger groups are sometimes used. Samples that are very small can raise doubts about the value of the pooled judgments, but large samples can be unwieldy.

A Delphi survey can be challenging in terms of scheduling. Participants must be given sufficient time to respond to each questionnaire, but interest may decline if there are long intervals between the survey rounds. Feedback to participants is often a combination of qualitative information (e.g., comments that are "typical" or insightful) and quantitative information. For example, if the experts have been asked to rate statements on a 5-point scale from strongly disagree to strongly agree, the percentage in each category might be computed and communicated to participants in the subsequent round.

The Delphi technique is an efficient means of combining the expertise of a geographically dispersed group. The experts are spared the necessity of attending a formal meeting, thus saving time and expense. Another advantage is that a persuasive or prestigious expert cannot have an undue influence on opinions, as could happen in a face-to-face situation. All panel members are on an equal footing. Anonymity probably encourages greater candor than might be expressed in a meeting. The Delphi technique is, however, time-consuming: experts must be solicited, questionnaires prepared and distributed, responses analyzed, results summarized, and new questionnaires sent. Panel members' cooperation may wane in later rounds, so attrition bias is a potential problem.

Example of a Delphi study: Chamberlain and colleagues (2020) used a four-round Delphi survey with an expert panel of 19 nurses to develop a new model of early career transition pathways in the specialty of community nursing. The panel of experts comprised a blend of clinical care, specialty community practice, and nursing policy expertise.
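The quantitative feedback described above (the percentage of experts endorsing each category of a 5-point rating scale) can be sketched in a few lines. The ratings and the 75% consensus threshold below are assumptions for illustration; they are not from the supplement:

```python
from collections import Counter

# Illustrative sketch of summarizing one Delphi round (hypothetical ratings).
# Experts rate a statement on a 5-point scale (1 = strongly disagree,
# 5 = strongly agree); the panel receives the distribution as feedback.

def summarize_round(ratings, consensus_pct=75.0):
    counts = Counter(ratings)
    n = len(ratings)
    distribution = {scale: 100 * counts.get(scale, 0) / n for scale in range(1, 6)}
    # One commonly assumed consensus rule: at least 75% of experts rate 4 or 5
    agree_pct = distribution[4] + distribution[5]
    return distribution, agree_pct >= consensus_pct

ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 2, 5, 4, 4, 5, 4, 4]  # 16 experts
distribution, consensus = summarize_round(ratings)
print(distribution, "consensus reached:", consensus)
```

In a real Delphi study, this distribution (plus illustrative comments) would be returned to the panel with the next round's questionnaire.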
METHODOLOGICAL STUDIES

Methodological studies are investigations focused on developing and improving methods for obtaining high-quality data and conducting rigorous research. The growing demands for sound, reliable outcome measures; for sophisticated procedures for controlling confounding variables; and for research designs that minimize bias have led to an increased interest in methodological research.

Many methodological studies in nursing focus on the development and psychometric evaluation of data collection instruments, particularly scales to measure abstract constructs. Suppose, for example, we conducted a study to develop and test a new instrument to measure patients' satisfaction with nursing care. In such a study, the purpose would not be to describe levels of patient satisfaction or to assess the correlation between patient satisfaction and other factors (e.g., staff or patient characteristics). The goal would be to develop a reliable and valid instrument for others to use in clinical or research applications.

Example of a psychometric methodological study: Kilpatrick and colleagues (2019) undertook a psychometric evaluation to assess the face validity, content validity, construct validity, and internal consistency of a new 41-item scale to measure health care providers' perceptions of team functioning and team effectiveness in interprofessional teams in acute and primary care.

Methodological research is not restricted to instrument development. For example, researchers sometimes use a randomized experimental design to test competing methodological strategies. Suppose we wanted to test whether sending birthday cards to participants reduced rates of attrition in longitudinal studies. Participants could be randomly assigned to a card or no-card condition. The dependent variable would be the rate of attrition from the study. Methodological studies are not always quantitative.
When particular challenges in conducting high-quality quantitative research are encountered, researchers sometimes use in-depth (qualitative) questioning, or unstructured observations in clinical settings, to better understand what the problems are and what solutions might be feasible.

REFERENCES

Beck, C. T. (2020). Secondary traumatic stress in maternal–newborn nurses: Secondary qualitative analysis. Journal of the American Psychiatric Nurses Association, 26, 55–64.

*Chamberlain, D., Harvey, C., Hegney, D., Tsai, L., McLellan, S., Sobolewska, A., . . . Wake, T. (2020). Facilitating an early career transition pathway to community nursing: A Delphi policy study. Nursing Open, 7, 100–126.

*Kilpatrick, K., Paquette, L., Bird, M., Jabbour, M., Carter, N., & Tchouaket, É. (2019). Team functioning and beliefs about team effectiveness in inter-professional teams: Questionnaire development and validation. Journal of Multidisciplinary Healthcare, 12, 827–839.

*Kolanowski, A., Behrens, L., Lehman, E., Oravecz, Z., Resnick, B., Boltz, M., . . . Eshraghi, K. (2020). Living well with dementia: Factors associated with nursing home residents' affect balance. Research in Gerontological Nursing, 13, 21–30.

Thorne, S. (2013). Secondary qualitative data analysis. In C. T. Beck (Ed.), Routledge international handbook of qualitative nursing research (pp. 393–404). New York, NY: Routledge.

*A link to this open-access article is provided in the Internet Resources section on the website.

Supplement for Chapter 17: Using GRADE in Systematic Reviews

As mentioned in the textbook, review teams that undertake quantitative systematic reviews, especially reviews addressing Therapy/intervention questions, are increasingly likely to evaluate how much confidence can be placed in the review findings.
Whether the findings suggest that an intervention had a substantial effect on an important outcome (e.g., a d = .60) or virtually no effect (e.g., a d = .10), it is important to understand whether the evidence supporting that finding is strong or weak. Numerous organizations, including the Cochrane Collaboration and the Joanna Briggs Institute (JBI), have adopted the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach to grading the quality of evidence (Guyatt et al., 2008, 2011).

TIP: GRADE was developed initially for evaluating the quality of a body of evidence addressing Therapy questions, but guidelines for grading other types of studies have been developed, such as evidence from prognosis studies (Iorio et al., 2015), economic evaluations (Brunetti et al., 2013), and diagnostic test assessments (Schünemann et al., 2008).

GRADE ratings are done on an outcome-by-outcome basis and usually are applied to only a subset of outcomes included in the review: the patient-important outcomes that the review team judges to be important to those making decisions about implementing an intervention. Thus, an initial step for the review team is to decide which outcomes from the review will be graded.

Although the quality of evidence from primary studies in a review lies on a continuum, the GRADE system involves making a categorical determination of the confidence one can place in the systematic review results; that is, whether confidence in the evidence for a specified outcome, regardless of the effect size, is High (++++), Moderate (+++), Low (++), or Very low (+). The second column of Table 1 ("Quality of Evidence") shows GRADE's description of each of these classifications. A rating of High, for example, corresponds to high confidence that the true effect is close to the effect estimated in the review.
TABLE 1. GRADE Scoring for the Quality of Evidence for the Effect of an Intervention on a Specific Outcome(a)

Study design (starting point):
● RCTs: start at 4 points (High).
● Downgraded RCTs or upgraded observational studies: 3 points (Moderate).
● Observational studies or double-downgraded RCTs: 2 points (Low).
● Downgraded studies of all types of design: 1 point (Very low).

Quality of evidence:
● High (++++): We are very confident that the true effect lies close to that of the estimate of the effect.
● Moderate (+++): We are moderately confident in the effect estimate: The true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different.
● Low (++): Our confidence in the effect estimate is limited: The true effect may be substantially different from the estimate of the effect.
● Very low (+): We have very little confidence in the effect estimate: The true effect is likely to be substantially different from the estimate of effect.

Subtract points if:
● Risk of bias: −1 serious risk; −2 very serious risk
● Inconsistent results: −1 serious concern; −2 very serious concern
● Indirectness of evidence: −1 serious concern; −2 very serious concern
● Imprecision (wide CIs): −1 serious concern; −2 very serious concern
● Publication bias: −1 likely; −2 very likely

Add points if:
● Large magnitude of effect: +1 large; +2 very large
● Dose-response gradient: +1 evidence of a gradient
● All plausible confounding: +1 if it would reduce a demonstrated effect or would suggest a spurious effect when results show no effect

CI, confidence interval.
(a) This table is a composite/adaptation of several tables created by the GRADE group.

The review team begins by assigning an a priori "High" score (corresponding to 4 points) if the review integrated findings from randomized controlled trials (RCTs) (column 1 of Table 1) or a "Low" score (2 points) if the primary studies were observational (nonexperimental).
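The starting scores and the point adjustments summarized in Table 1 can be sketched in a few lines of code. This is a minimal illustration of the arithmetic only: the function name, dictionary keys, and labels are our own, not part of GRADE or the GRADEpro software, and real GRADE judgments involve the qualitative assessments described below, not just point counting.

```python
# Sketch of the point logic in Table 1: start at High (4) for RCTs or
# Low (2) for observational studies, subtract for the five downgrade
# criteria, add for the three upgrade criteria, and clamp to the scale.

LABELS = {4: "High (++++)", 3: "Moderate (+++)", 2: "Low (++)", 1: "Very low (+)"}

def grade_outcome(design, downgrades=None, upgrades=None):
    """Score confidence in the evidence for one outcome.

    design     -- "rct" starts at 4 (High); "observational" starts at 2 (Low)
    downgrades -- points subtracted (1 = serious, 2 = very serious) for risk
                  of bias, inconsistency, indirectness, imprecision, and
                  publication bias
    upgrades   -- points added for large magnitude of effect (+1/+2),
                  dose-response gradient (+1), and plausible confounding
                  that strengthens the conclusion (+1)
    """
    score = 4 if design == "rct" else 2
    score -= sum((downgrades or {}).values())
    score += sum((upgrades or {}).values())
    score = max(1, min(4, score))   # clamp to the 4-level scale
    return score, LABELS[score]

# A body of RCT evidence downgraded one level for serious risk of bias:
print(grade_outcome("rct", downgrades={"risk_of_bias": 1}))      # → (3, 'Moderate (+++)')
# Observational evidence upgraded for a very large magnitude of effect:
print(grade_outcome("observational", upgrades={"large_effect": 2}))  # → (4, 'High (++++)')
```

The clamp reflects that the scale bottoms out at Very low (+): evidence cannot be rated below 1 no matter how many criteria are flagged.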
Evidence can be downgraded based on assessments of five criteria (column 3 of Table 1):

● Risk of bias. Important limitations in RCTs include lack of allocation concealment, lack of blinding, loss to follow-up, and selective outcome reporting bias. Key limitations in observational studies include failure to control confounders, flawed measurement of exposure and outcome, and faulty eligibility criteria. Reviewers make a judgment about risk of bias (low, serious, very serious) across all studies in the review for the specified outcome.
● Inconsistent results. Results are inconsistent if point estimates vary widely across studies, the confidence intervals for studies show minimal overlap, or the test for heterogeneity is significant.
● Indirectness of evidence. Direct evidence comes from studies that directly compare interventions of interest on outcomes of interest (e.g., not surrogate outcomes) in populations of direct interest (e.g., the population of interest is people in long-term care but the evidence is from hospital patients).
● Imprecision. Wide confidence intervals for the specified outcome for studies in the review (usually because of small sample size) may result in downgrading the rating.
● Publication bias. Publication bias can result in substantial overestimates of effects, so a body of evidence can be downgraded if the risk of publication bias is likely.

The rating for the evidence for a specific outcome in non-RCTs can be upgraded under three circumstances (column 4 of Table 1):

● Large magnitude of effect. Confidence in evidence from observational studies can be upgraded when an effect is so large that the biases common to nonexperimental studies could not account for the magnitude of the effect.
● Dose-response gradient. Confidence is enhanced when the effect is proportional to the degree of exposure.
● All plausible confounding.
The score can be upgraded when possible confounders would probably diminish the observed effect, and so the actual effect likely is larger than the calculated effect size suggests.

The use of GRADE inevitably involves subjective judgments. For example, would a review of 10 RCTs be downgraded from High to Moderate if only 2 studies did not use blinding? The developers of GRADE acknowledge "the need to view the body of evidence as a whole" in scoring (Guyatt et al., 2011). Evaluating confidence in the evidence using GRADE is not completely objective, but what it offers is transparency, requiring reviewers to explicitly provide a rationale for grading decisions.

Systematic reviewers who apply the GRADE approach often use software called GRADEpro, which generates two types of tables. The first is an evidence profile that provides detailed information about the grading judgments for each outcome. The rows of evidence profiles indicate each outcome that has been graded. Columns correspond to features of the scoring, such as risk of bias, inconsistency, and so on. The entries in the cells explain why any downgrading occurred. An example of an evidence profile for a fictitious systematic review is presented in Table 2. GRADEpro can also produce Summary of Findings tables. These tables show, for each outcome (shown in the rows), the results of the meta-analysis, the number of participants and the number of studies on which the effect size was based, and then the GRADE score.

Example of using GRADE: Milazi and colleagues (2017) did a systematic review of the effectiveness of educational or behavioral interventions on adherence to phosphate control in adults receiving hemodialysis. The review integrated the evidence for several outcomes, one of which was rated using GRADE.
A meta-analysis was performed for eight RCTs that compared intervention receipt to standard care with regard to serum phosphate levels. Their Summary of Findings table (Table 3 in this supplement) showed a significant benefit for those in an intervention group; confidence in the effect size was Moderate. The footnote at the bottom of their table indicated why evidence from the studies, all of which were RCTs, was downgraded: four of the eight studies had potential problems with blinding and allocation concealment.

As mentioned in the textbook, a working group at JBI developed a system to rate confidence in the synthesized findings of a qualitative evidence synthesis. Their ConQual approach also involves ratings on a scale from 4 (high) to 1 (very low), summarizing the reviewers' confidence in each finding. A separate effort was undertaken by a group working with GRADE to develop a method to rate confidence in the findings from a qualitative synthesis. A seven-paper series was published in 2018 to explain GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative Research) (see Lewin et al., 2018). The GRADE-CERQual approach considers four aspects for each finding: methodological limitations, coherence, data adequacy, and relevance. Publication bias also plays a role. The system yields a Summary of Qualitative Findings table and an evidence profile. Like GRADE, there are four levels of confidence: High, Moderate, Low, and Very Low. ConQual and CERQual have adopted similar rankings, but the criteria for scoring in the two systems are different.
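Returning to the quantitative side, the figures reported in a Summary of Findings table can be sanity-checked by hand: the standard error can be backed out of the 95% confidence interval, and the Z statistic for the overall effect recomputed. Below is a minimal sketch using the serum phosphate figures from the Milazi review (d = −0.23 mmol/L, 95% CI −0.37 to −0.08). The helper function is our own, and because the published CI endpoints are rounded, the recomputed Z only approximates the reported Z = 3.01.

```python
from math import erfc, sqrt

def z_from_ci(estimate, ci_low, ci_high, level_z=1.96):
    """Back out the standard error from a 95% CI and recompute the
    Z statistic and two-sided p value for the overall effect."""
    se = (ci_high - ci_low) / (2 * level_z)   # CI half-width / 1.96
    z = abs(estimate) / se
    p_two_sided = erfc(z / sqrt(2))           # 2 * (1 - Phi(z))
    return z, p_two_sided

z, p = z_from_ci(-0.23, -0.37, -0.08)
print(f"Z = {z:.2f}, p = {p:.4f}")   # close to the published Z = 3.01, p = .003
```

A check like this is one informal way to see why a narrow CI (relative to the effect estimate) corresponds to a precise, strongly significant pooled result, whereas a CI approaching zero would signal the kind of imprecision that GRADE downgrades.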
TABLE 2. Example of a GRADE Evidence Profile From a Fictitious Meta-Analysis of the Effect of Massage on Pain

Outcome: Pain Intensity—Short-Term Follow-Up (scores: better indicated by LOWER score)
● No. of studies: 4; design: all randomized trials
● Limitations: serious(a); inconsistency: no serious inconsistency (I2 = 43%); indirectness: no serious indirectness; imprecision: no serious imprecision; publication bias: not serious(b)
● Number of patients: 179 (massage treatment), 254 (sham)
● Effect (absolute): SMD −0.72 (95% CI, −1.25 to −0.48); relative effect: not applicable
● Quality: MODERATE (+++); importance: Important

Outcome: Pain Intensity—Long-Term Follow-Up (scores: better indicated by LOWER score)
● No. of studies: 3; design: all randomized trials
● Limitations: no serious limitations; inconsistency: no serious inconsistency (I2 = 47%); indirectness: no serious indirectness; imprecision: serious(c); publication bias: not serious(d)
● Number of patients: 181 (massage treatment), 249 (sham)
● Effect (absolute): SMD −0.47 (95% CI, −.91 to −0.01); relative effect: not applicable
● Quality: MODERATE (+++); importance: Important

CI, confidence interval; SMD, standardized mean difference.
(a) One trial had high risk of bias (unsure of concealment; no blinding); three trials with low risk of bias.
(b) Egger's test (p = .25).
(c) Only one study with a wide CI.
(d) Egger's test (p = .36).

TABLE 3. Summary of Findings Table

Educational or behavioral interventions compared to standard care for adherence to phosphate control in adults receiving hemodialysis
Patient or population: adults receiving hemodialysis
Intervention: educational or behavioral interventions
Comparison: standard care

Outcome: Serum phosphate level
● Number of participants: n = 408 (education or behavioral intervention), n = 382 (standard care), n = 790 total (8 RCTs)
● Mean difference(a): d = −0.23 mmol/L, 95% CI (−0.37, −0.08)
● Test for overall effect: Z = 3.01, p = .003
● Quality of the evidence (GRADE): MODERATE (+++)(b)

CI, confidence interval; d, mean difference; Z, Z-score.
(a) Mean difference in serum phosphate level is expressed as intervention group minus standard care group.
(b) No explanation was given on blinding of data collectors and allocation concealment in four studies.
Reprinted with permission from Milazi, M., Bonner, A., & Douglas, C. (2017). Effectiveness of educational or behavioral interventions on adherence to phosphate control in adults receiving hemodialysis: A systematic review. JBI Database of Systematic Reviews and Implementation Reports, 15(4), 971–1010.

REFERENCES

Brunetti, M., Shemilt, I., Pregno, S., Vale, L., Oxman, A., Lord, J., . . . Schünemann, H. (2013). GRADE guidelines: 10. Considering resource use and rating the quality of economic evidence. Journal of Clinical Epidemiology, 66, 140–150.

*Guyatt, G. H., Oxman, A. D., Akl, E., Kunz, R., Vist, G., Brozek, J., . . . Schünemann, H. (2011). GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology, 64, 383–394.

*Guyatt, G. H., Oxman, A. D., Vist, G., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., & Schünemann, H. (2008). GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 336, 924–926.

Iorio, A., Spencer, F., Falavigna, M., Alba, C., Lang, E., Burnard, B., . . . Guyatt, G. (2015). Use of GRADE for assessment of evidence about prognosis: Rating confidence in estimates of event rates in broad categories of patients. BMJ, 350, h870.

*Lewin, S., Booth, A., Glenton, C., Munthe-Kaas, H., Rashidian, A., Wainwright, M., . . . Noyes, J. (2018). Applying GRADE-CERQual to qualitative evidence synthesis findings: Introduction to the series. Implementation Science, 13(Suppl. 1), 2.

Milazi, M., Bonner, A., & Douglas, C. (2017). Effectiveness of educational or behavioral interventions on adherence to phosphate control in adults receiving hemodialysis: A systematic review. JBI Database of Systematic Reviews and Implementation Reports, 15(4), 971–1010.
*Schünemann, H., Oxman, A., Brozek, J., Glasziou, P., Jaeschke, R., Vist, G., . . . Guyatt, G. (2008). GRADE: Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ, 336, 1106–1110.

*A link to this open-access journal article is provided in the Internet Resources section on the website.
Explanation & Answer:
2 Responses 250 Words Each

Explanation & Answer

View attached explanation and answer. Let me know if you have any questions.

Running head: REPLIES


Replies

Name
Affiliation
Date


Reply 1
MacKenzie Bowron, how are you? I agree that mixed methods research occurs when qualitative and quantitative data are combined in one study. Mixed methods research draws on both qualitative and quantitative methodologies: by combining them, researchers can uncover unexplained findings and pursue answers to the questions those findings raise. In a mixed-methods study, quantitative and qualitative data are collected, analyzed, and integrated. The primary concept is that combining quantitative and qualitative approaches provides a better understanding of the problem than either approach alone. For instance, the effect of relationship length, quality, and commitment on penetration frequency is a topic that could be ...

