Discussion Question 1: Single-Subject Design in Social Work-Practical Examples

User Generated

Gur_Vasnzbhf23

Humanities

SSS 756

Catholic University of America

Description

*****Please read the article and view the grading rubric in the attachment in order to answer the questions.*****

Wong and Vakharia's (2012) article speaks to the benefits and limitations of replicating real-world solutions based on practice evaluation techniques learned in class. As detailed by the authors, students were asked to identify a specific problem or goal, design and implement an intervention using a single-system design (SSD), and then evaluate the outcomes of their various interventions. Topics to be addressed included recurring habits, personal management/organizational concerns, and issues related to other behaviors (e.g., anxiety, depression, etc.).

  • Of the 29 students, 16 identified themselves as the subject (i.e., the system) of their interventions. In doing so, how might asking them to measure themselves at baseline (Phase A) and during the intervention (Phase B) bias the study?
  • How might the eight students who chose to work with a loved one bias the study?
  • How might you structure the study differently to minimize these potential biases?
  • Now reflect upon yourself. What example from your personal life do you feel comfortable sharing regarding how you tried, either unsuccessfully or successfully, to objectively measure and improve one of your traits? How was your view of yourself biased? In your responses to other discussion posts, focus your attention on colleagues' responses to the above question. How might a particular student gain objectivity in his/her self-critique?

Notes to follow from my professor

Please note that initial posts should be between 300 and 500 words. As applicable, relate and support your initial post with what you learned from the course materials for this week and any other resources you want to share (e.g., from prior courses), using standard APA citations. In addition to starting a discussion by making your initial post, you are required to read the postings of fellow students and respond to at least two other initial posts. Your responses should elaborate on the initial postings of your colleagues (e.g., exchange ideas, debate points, and/or add to their ideas) and should be approximately 150 words each. The grading rubric for this discussion is in the following document: SSS 756 Discussion Board Rubric.pdf

Unformatted Attachment Preview

Invited Article

Teaching Research and Practice Evaluation Skills to Graduate Social Work Students

Stephen E. Wong and Sheila P. Vakharia, Florida International University, Miami, FL, USA

Research on Social Work Practice, 22(6), 714-718. © The Author(s) 2012. DOI: 10.1177/1049731512451060

Abstract

Objective: The authors examined outcomes of a graduate course on evaluating social work practice that required students to use published research, quantitative measures, and single-system designs in a simulated practice evaluation project. Method: Practice evaluation projects from a typical class were analyzed for the number of research references cited, type of client, goals or problems, measures, interventions, single-system designs, and outcomes. Results: More than half of the students conducted self-improvement projects monitored with self-report measures, and the goals or problems selected and interventions applied varied widely. More than 80% of the projects were evaluated with simple AB designs; over 45% of these were associated with statistically significant improvements, and an additional 43% showed gains that did not reach statistical significance. Conclusions: Results suggest that students can be taught the techniques and skills needed to formulate interventions derived from published research and to evaluate the effects of these interventions using single-system designs.

Keywords: teaching practice evaluation, teaching program evaluation, practice evaluation exercise, graduate social work students, single-case designs, single-subject designs, single-system designs

The Council on Social Work Education's (CSWE) Educational Policy and Accreditation Standards (2008b) enjoins schools of social work to teach students how to evaluate interventions and the outcomes of their practice. The importance of practice evaluation is also reflected in the knowledge and practice behaviors contained in the instructional curriculum of the CSWE's Advanced Social Work Practice in Clinical Social Work (ca. 2008a).

A well-developed method of program evaluation applicable to clinical practice is single-case or single-system research designs (SSDs). Because it does not require large samples of homogeneous subjects or random assignment to treatment and control groups, this methodology gauges treatment efficacy with procedures such as introducing and withdrawing treatment, or successively administering treatment across individual problems or clients. Social work textbooks on single-system evaluation first appeared in the 1970s, and that content has since infused mainstream social work education (Bloom, Fischer, & Orme, 2009; Grinnell & Unrau, 2011; Noia & Tripodi, 2008; Royse, Thyer, & Padgett, 2010).

One educational exercise that can give students direct experience in applying SSDs is a personal self-change project. Barth (1984) assigned social work students self-change projects aimed at improving their professional skills in the practicum setting. Student beliefs and behaviors affecting interactions with clients, agency staff, and persons in the community were monitored during baseline and subsequent intervention phases. Barth reported substantial gains in varied outcomes for several sample self-change projects during the intervention phases, suggesting that these projects were effective educational assignments. Anderson (2000) used more conventional self-change projects focused on personal concerns or desired goals while teaching upper-level undergraduate psychology students.
These students selected projects on topics such as improving time and money management, promoting healthy habits, increasing appropriate assertiveness, and decreasing negative self-statements, which they attempted to change with behavioral interventions and evaluated with AB (baseline, intervention) designs. Anderson reported that slightly more than half of these self-change projects reached the student's goal, and one third showed significant progress toward the goal. Similar to the previous study, Morgan (2009) had undergraduate psychology students implement behavior change projects aimed at personal habits such as exercise, caffeine consumption, or study time. Baseline and intervention data from these projects were later graphed and analyzed with statistical process control procedures. Morgan (2009) described only one sample student self-change project that obtained statistically significant improvements, but he reported that its results were "fairly typical" for the class.

An alternative assignment to self-change projects is applying evaluative methods in supervised practice with actual clients. In a study involving graduate-level social work students, Dillenburger, Godina, and Burton (1997) taught behavioral principles to students, who then applied these techniques with a client in their field placement and evaluated their effects. These authors presented two sample client behavior change projects, evaluated within SSDs, that appeared to significantly increase clients' desired behaviors. In an examination of process variables associated with group therapy, Johnson, Beckerman, and Auerbach (2001) instructed students on how to apply AB designs to evaluate the effects of encouraging the development of trusting relationships between members on group attendance, and of various interventions aimed at increasing verbal participation in a veterans' support group. Employing statistical tests and software for SSDs, Johnson et al. (2001) found that the changes in group processes associated with students' interventions in these two examples reached statistical significance.

This article describes a graduate-level social work course that used personal self-change and client behavior change projects to teach students techniques to evaluate their own practice. The class was not a clinical course in behavior therapy, behavior analysis, or any particular treatment approach, although it presented numerous illustrations involving behavioral interventions. In addition, this article reports on multiple measures of student performance for all evaluation projects from one semester, rather than reporting on selected and perhaps unrepresentative student projects.

Framework for the Course

Assigned Readings and Class Lectures

The text for this course was Evaluating Practice: Guidelines for the Accountable Professional (Bloom et al., 2009), and the course content and weekly schedule closely followed the organization of the book. The first several chapters of the book present principles and strategies of measurement, and specific assessment procedures and instruments. The next set of chapters discusses uncontrolled case study and controlled single-system designs, and their appropriate applications and limitations. The final chapters of the book cover principles and methods of visual and statistical analyses.
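As a brief illustration of the kind of statistical analysis these texts and studies describe, the sketch below shows one simple statistical-process-control style check on AB data, similar in spirit to the control-chart procedures Morgan (2009) mentioned: build a control band from the baseline (Phase A) observations and count intervention (Phase B) points falling outside it. This is an illustrative approximation, not the exact procedure used by Morgan or in this course; the function name, the 2-standard-deviation band, and the sample data are all assumptions made for the example.

```python
import numpy as np

def baseline_band_check(phase_a, phase_b, n_sd=2.0):
    """Count Phase B points outside a control band built from Phase A.

    A simplified SPC-style check for AB data (illustrative only): the
    band is the baseline mean +/- n_sd baseline standard deviations.
    """
    phase_a = np.asarray(phase_a, dtype=float)
    phase_b = np.asarray(phase_b, dtype=float)
    center = phase_a.mean()
    half_width = n_sd * phase_a.std(ddof=1)  # sample SD of the baseline
    lower, upper = center - half_width, center + half_width
    outside = (phase_b < lower) | (phase_b > upper)
    return lower, upper, int(outside.sum())

# Hypothetical self-change project: minutes of study time per day,
# one week of baseline followed by two weeks of intervention.
baseline = [35, 40, 30, 45, 38, 32, 36]
intervention = [50, 55, 62, 58, 60, 65, 59, 63, 68, 61, 66, 70, 64, 67]
lo, hi, n_out = baseline_band_check(baseline, intervention)
print(f"Band: [{lo:.1f}, {hi:.1f}]; Phase B points outside: {n_out}/{len(intervention)}")
```

Points consistently falling outside the band in the desired direction suggest a change from baseline, although with a simple AB design such a shift can never be cleanly separated from history or maturation effects.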
Class lectures explained key concepts and procedures, illustrated them with published studies relevant to social work practice, and provided opportunities for questions and discussion.

Practice Evaluation Projects (PEPs)

The main assignment for teaching students how to formulate and evaluate evidence-based interventions was the PEP. All students were required to conduct a PEP, which incorporated the following four tasks: (a) use the research literature to conceptualize a problem or a goal and to find evidence-supported interventions to improve the condition; (b) conduct an individualized assessment involving quantitative measurement of the above problem or goal; (c) design and apply one or more evidence-supported interventions; and (d) evaluate the effectiveness of the intervention or interventions using a single-system design.

PEPs could be based on interventions revolving around a client, a client system, or a self-management project. Client or client system projects aimed at assessing a concern or problem of a person or a social system (e.g., family, small group, or agency), designing and applying an intervention for the problem, and evaluating the effects of the intervention. As an alternative to working with an actual client, students were allowed to recruit family members, friends, and acquaintances to participate as voluntary "clients" for these projects. Topics of client or system service plans were significant concerns or goals such as improving parenting skills or staff supervision practices; advancing social, self-care, academic, job-seeking, or recreational skills; or decreasing verbal aggression, marital conflict, child misconduct, or other interpersonal problems.

Another type of PEP was a self-management project that assessed a personal goal or problematic behavior, designed and applied an intervention to achieve the goal or to alleviate the problem, and evaluated the effects of the intervention. Previous self-management projects were aimed at improving diet, exercise, sleep hygiene, study habits, and time management, as well as reducing annoying habits, tics, compulsions, angry outbursts, negative and self-defeating thoughts, smoking, drinking, excessive spending, and other addictive behaviors.

Students were required to write a detailed description of their evaluation project using a format for research reports derived from the Journal Article Reporting Standards of the Publication Manual of the American Psychological Association (APA, 2010). Midway through the semester, students submitted a draft of their report and received feedback from the instructor that allowed them to improve their project methodology and to prepare a more refined final project report. At the end of the semester, students submitted their final report and gave an 8-minute oral presentation, which allowed students to learn about the procedures and results of all other evaluation projects in the class.

Evaluation of PEPs From One Sample Class

To evaluate outcomes of the previously described educational activities and assignments, all student PEPs from one semester were reviewed and analyzed. This course had been taught by the first author with the same basic format for the last 8 years. The course selected for evaluation represented an average or middle-of-the-distribution class, containing neither an exceptionally strong nor an exceptionally weak group of students (indicators of this are discussed in the Discussion section).

Results

A total of 29 student PEPs were analyzed, and the results are summarized in Table 1.
The table presents the type of client, goals or problems, number of published references cited, measures used, interventions applied, and single-system designs employed.

Table 1. Summary of Evaluation Project Reports

Client
  • Self (16)
  • Family member, friend, or acquaintance (8)
  • Clients (5)

Goal/Problem
  • Repetitive habit (e.g., nail biting, hair pulling, lock checking) (6)
  • Time management, money management, or personal organization (5)
  • Stress, anxiety, or panic (3)
  • Weight loss or high blood pressure (3)
  • Depression or negative thoughts (2)
  • Marital communication or conflict (2)
  • Aggressive behavior (1)
  • Bedwetting (1)
  • Fear of lizards (1)
  • Hyperactive behavior (1)
  • Recurring nightmares (1)
  • Sleep difficulties (1)
  • Social skills deficit (1)
  • Substance abuse (1)

Published References Cited
  • Mean: 7.2; Range: 3–14

Measures
  • Self-report log (22)
  • Behavioral observation (2)
  • Marital interaction diary (2)
  • Parental or family log (2)
  • Weight scale, skinfold fat caliper (2)
  • Behavior problem checklist (1)
  • Diastolic and systolic blood pressure (1)
  • Random urine drug test (1)

Interventions
  • Engaging in an alternative response to replace the problem behavior (4)
  • Habit reversal (4)
  • Progressive muscle relaxation (4)
  • Graduated exposure (3)
  • Self-monitoring (3)
  • Social reinforcement (3)
  • Tangible reinforcement (3)
  • Cognitive behavior therapy (2)
  • Exercise (2)
  • Improved diet (2)
  • Meditation and controlled breathing (2)
  • Awareness training (1)
  • Brief solution-focused therapy (1)
  • Brief timeout from reinforcement (1)
  • Cognitive restructuring (1)
  • Goal setting (1)
  • Guided self-dialogue (1)
  • Listening to classical music (1)
  • Personal budgeting (1)
  • Problem solving (1)
  • Response prevention (1)
  • Scheduling work on smaller tasks (1)
  • Sleep hygiene techniques (1)
  • Social skills training (1)

Evaluation Designs
  • AB (27)
  • AB with reconstructed baseline (1)
  • AB1B2 (1)
  • ABAB (1)
  • ABCB (1)
  • A-B-BC-BD-BC-BD-BC (1)
  • B (1)

Type of Client

Students utilized themselves as clients in a slight majority of the projects (n = 16), focusing either on self-improvement or on alleviating a personal problem. The remaining projects were divided between assisting a family member, friend, or acquaintance (n = 8) and assisting actual clients (n = 5).

Goal or Problem Selected

The most common concern selected for the project was a repetitive habit (e.g., nail biting, hair pulling; n = 6), closely followed by self-management concerns (e.g., time management, money management; n = 5). Stress or anxiety problems (n = 3) and weight loss and other health issues (n = 3) were additional recurring concerns. The remaining goals or problems were highly individualized and varied widely, from unconstructive negative thoughts to a fear of lizards.

Published References Cited

The average number of published references (i.e., printed or online articles, book chapters, and books) cited in the PEPs was 7.2 (range = 3–14; SD = 2.73).

Measures Utilized

Over two thirds of the measures utilized in the projects were self-report logs or social interaction diaries (n = 22). Behavioral observation or family logs accounted for a much smaller portion of the measures (n = 6). The rest of the measures relied on mechanical devices (e.g., weight scale, blood pressure meter) or other physical tests (e.g., urine drug assay).

Interventions Applied

Multiple interventions were sometimes applied to a single concern, so the number of interventions exceeded the number of clients.
Performing an alternative response to replace a problem behavior, habit reversal, and progressive muscle relaxation were the interventions used at the highest frequency (n = 4 each). Graduated exposure, self-monitoring, and social or tangible reinforcement were the next most frequently utilized procedures (n = 3 each). Cognitive behavior therapy, exercise, improved diet, and meditation were each used in two projects. A wide variety of interventions, ranging from brief solution-focused therapy to listening to classical music, were each applied once in the remaining projects.

Evaluation Designs Utilized

An overwhelming majority of the students utilized a simple AB design (A = baseline phase; B = treatment phase) to evaluate their interventions (n = 27). An AB design with a reconstructed baseline was used in one project, and an AB1B2 design (with a revised treatment procedure in the third phase) was utilized in a second. One student used an ABAB or reversal design, and another student used a variant of this design, an ABCB. Finally, one student utilized a sophisticated A-B-BC-BD-BC-BD-BC design to analyze the combined and additive effects of multiple treatment procedures.

Outcomes of Student PEPs

At the beginning of the class, students were informed that their PEPs would be graded on the quality of their planning and execution, not on their outcomes. Nevertheless, data on PEP outcomes might give some indication of the care and competence with which these assignments were carried out. What constitutes an appropriate statistical test of single-system design data is a complex question without a broadly accepted answer. In an effort to objectively quantify the outcomes of the present evaluation projects, we utilized the Conservative Dual-Criteria (CDC) method, a relatively straightforward statistical test developed by single-case design researchers and presented in the Bloom, Fischer, and Orme (2009) text.

The CDC (Fisher, Kelley, & Lomas, 2003) requires the calculation of an adjusted baseline mean line and an adjusted baseline regression line that are projected into the treatment phase. In PEPs where the desired outcome of the intervention was to increase behavior, both the baseline mean line and the regression line were adjusted by adding 0.25 standard deviation of the baseline data to the lines. In PEPs where the desired outcome was to decrease behavior, both lines were lowered by 0.25 standard deviation of the baseline data. After this step was completed, the number of data points in the treatment phase above both lines was counted in PEPs where behaviors were expected to increase, or the number of data points below both lines was counted in PEPs where behaviors were expected to decrease. If the number of data points in the treatment phase falling above (or below) these two lines equaled or exceeded the criterion number of points for the chosen significance level (based on the binomial test), the null hypothesis was rejected.
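To make the mechanics of the CDC test concrete, here is a minimal sketch in Python based on the description above and on Fisher, Kelley, and Lomas (2003). The function name, interface, and sample data are assumptions made for illustration; this is not code from any published package, and details such as using the sample standard deviation are interpretive choices.

```python
import numpy as np
from scipy import stats

def cdc_test(phase_a, phase_b, direction="increase", alpha=0.05):
    """Conservative Dual-Criteria (CDC) test for AB data (illustrative sketch)."""
    phase_a = np.asarray(phase_a, dtype=float)
    phase_b = np.asarray(phase_b, dtype=float)
    shift = 0.25 * phase_a.std(ddof=1)  # 0.25 baseline SD (sample SD assumed)
    sign = 1.0 if direction == "increase" else -1.0

    # Criterion line 1: the baseline mean, shifted toward the expected change.
    mean_line = phase_a.mean() + sign * shift

    # Criterion line 2: an OLS trend fitted to the baseline, projected into
    # the treatment phase and shifted by the same amount.
    x_a = np.arange(len(phase_a))
    slope, intercept, *_ = stats.linregress(x_a, phase_a)
    x_b = np.arange(len(phase_a), len(phase_a) + len(phase_b))
    trend_line = intercept + slope * x_b + sign * shift

    # Count treatment-phase points beyond BOTH lines in the expected direction.
    if direction == "increase":
        hits = int(np.sum((phase_b > mean_line) & (phase_b > trend_line)))
    else:
        hits = int(np.sum((phase_b < mean_line) & (phase_b < trend_line)))

    # Binomial criterion (p = .5): smallest k with P(X >= k) <= alpha.
    # Needs enough treatment points (roughly n >= 5 at alpha = .05).
    n = len(phase_b)
    k = np.arange(n + 1)
    upper_tail = stats.binom.sf(k - 1, n, 0.5)  # P(X >= k)
    criterion = int(k[upper_tail <= alpha][0])
    return hits, criterion, hits >= criterion

# Hypothetical project: daily nail-biting episodes, expected to decrease.
hits, criterion, significant = cdc_test(
    [12, 14, 11, 13, 15, 12, 13], [10, 9, 7, 8, 6, 5, 6, 4, 5, 3],
    direction="decrease")
print(f"{hits} points met both criteria; {criterion} needed; significant: {significant}")
```

Shifting both lines by 0.25 baseline standard deviations is what makes the test "conservative": a treatment-phase point must clear the projected level and trend by a margin before it counts as evidence of change.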
The CDC test was run on each of the students' evaluation project graphs; thus, if a student had more than one independent variable, she or he would have results from multiple CDC tests. Results of the CDC tests showed that 15 of the 32 graphs contained statistically significant improvements at the .05 level in the transition from baseline to treatment phase. An additional 13 graphs displayed gains ranging from 23% to 600% where the differences did not reach statistical significance or the data did not conform to the requirements of the CDC test. The remaining graphs revealed no appreciable improvement from baseline to treatment phase, but in no instance were the treatment-phase data worse than the baseline data.

Discussion and Applications to Social Work Education

The data obtained from the PEPs showed that, as required by the course assignment, students read and reviewed multiple research articles while formulating their simulated client's goals or concerns. Using this research, students found or designed measures of those goals or concerns, and subsequently selected, adapted, applied, and evaluated evidence-based interventions to address them. Although PEPs were not graded on the basis of outcome, a large proportion of student projects were associated with statistically significant improvements or with positive changes that did not reach statistical significance. These improvements suggest that student PEPs were well constructed and often efficacious in alleviating the targeted concern. PEPs encompassed a wide range of goals and problems, objective measures, and interventions, demonstrating during class presentations that social work practice aimed at a broad spectrum of personal and social concerns could be evaluated on an individualized and ongoing basis. In sum, PEPs appear to be a valuable classroom exercise for teaching social work students to evaluate their practice, as mandated by several CSWE standards.

One question about this study is whether the class chosen for analysis was representative of other classes given the same instruction and assignments. In other words, could this particular class be giving an overly favorable impression of student performance and outcomes? It should be mentioned that a deliberate effort was made to choose an average, not a superior or outstanding, class for examination. Final grades and the number of projects employing innovative research designs (e.g., Wong, 2010) are two indicators of class performance, and this class did not earn an especially large number of "As" or employ numerous sophisticated evaluation designs compared with other classes of this type (the project incorporating an A-B-BC-BD-BC-BD-BC design being the one exception).

Another possible criticism of this study and the described course is that the issues selected for assessment and intervention in the student PEPs did not represent the sorts of clinical problems encountered by social workers in the field. Indeed, serious problems such as domestic violence and child abuse were not subjects of the PEPs, and students were discouraged from selecting urgent or dangerous issues for this classroom exercise. Difficulties addressed by self-improvement projects, which constituted more than half of the PEPs, were unlikely to have been as severe as problems encountered on the job by professional social workers. However, instructional exercises and illustrations used for training are often less complex and demanding than the actual problems dealt with by experienced professionals. Moreover, this course was not primarily clinical in nature, and its main focus was teaching the application of assessment and program evaluation techniques.

Future work in teaching practice evaluation to social work students should strive to better integrate this instruction with students' field placements and agency operation.
Classroom-based courses separated from students' field placements, such as the present one, do not prepare students for the difficulties and obstacles of practice evaluation in actual service agencies (e.g., short-term treatments, lack of baseline phases, limited access to standardized measures). In addition, the weak connections between classroom curriculum and field placement settings do not promote the adaptation of academic evaluation techniques to the demands and constraints of service settings. Finally, separating instruction in evaluative methods from students' placement settings gives up the opportunity to demonstrate to agency staff the value of these procedures for gauging treatment outcomes and developing more effective interventions. Although the PEP appears to be a useful exercise for blending instruction in research and practice, it is only a preliminary step in teaching social work students to use evidence-based research and research methodology to formulate and evaluate their practice.

Author's Note

This article was invited and accepted at the discretion of the editor.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

References

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Anderson, D. A. (2000). Exposing undergraduates to behavior therapy: Self-change projects. Behavior Therapist, 23, 122–128.

Barth, R. P. (1984). Professional self-change projects: Bridging the clinical-research and classroom-agency gaps. Journal of Education for Social Work, 20, 13–19.

Bloom, M., Fischer, J., & Orme, J. G. (2009). Evaluating practice: Guidelines for the accountable professional (6th ed.). Boston, MA: Pearson/Allyn & Bacon.

Council on Social Work Education. (ca. 2008a). Advanced social work practice in clinical social work. Retrieved February 18, 2012, from http://www.cswe.org/File.aspx?id=26685

Council on Social Work Education. (2008b). Educational policy and accreditation standards. Retrieved February 18, 2012, from http://www.cswe.org/File.aspx?id=13780

Dillenburger, K., Godina, L., & Burton, M. (1997). Training in behavioral social work: A pilot study. Research on Social Work Practice, 7, 70–78.

Fisher, W. W., Kelley, M. E., & Lomas, J. E. (2003). Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis, 36, 387–406.

Grinnell, R. M., & Unrau, Y. A. (2011). Social work research and evaluation: Foundations of evidence-based practice. New York, NY: Oxford University Press.

Johnson, P., Beckerman, A., & Auerbach, C. (2001). Researching our own practice: Single system design for groupwork. Groupwork, 13, 57–77.

Morgan, D. L. (2009). Using single-case design and personalized behavior change projects to teach research methods. Teaching of Psychology, 36, 267–269.

Noia, J. D., & Tripodi, T. (2008). A primer on single-case design for clinical social workers (2nd ed.). Washington, DC: NASW Press.

Royse, D., Thyer, B. A., & Padgett, D. K. (2010). Program evaluation: An introduction (5th ed.). Belmont, CA: Wadsworth/Cengage Learning.
Wong, S. E. (2010). Single-case evaluation designs for practitioners. Journal of Social Service Research, 36, 248–259.

SSS 756 Rubric for Discussions #1, #2, and #4

  • Academic Expression/Writing Mechanics (3 points): The student's postings were appropriate for a graduate-level course. Any mistakes in grammar, punctuation, sentence structure, and spelling, if present, were minor and not obtrusive.
  • Quality of Initial Posting (6 points): The student's initial post reflected original thought that was rich in content, analysis, application, and insight. Evidence of support from course materials and other pertinent professional/academic sources was well documented.
  • Quality of Participation (4 points): The student provided substantive, meaningful responses to at least two group members' initial posts.
  • Timeliness (2 points): The student's initial post and subsequent responses to peers were submitted by their respective due dates and times.
  • Total possible points: 15

Explanation & Answer

Attached.


Single-Subject Design in Social Work-Practical Examples
Student’s Name

Institutional Affiliation


Single-Subject Design in Social Work-Practical Examples

Asking the 16 students to measure themselves at baseline and during the intervention may bias the study through habituation bias. Habituation bias occurs when one is compelled to provide similar answers to questions that have similar wording or perceived meaning (Ariel, Al-Harthy, Was, & Dunlosky, 2011). When one cannot differentiate between two terms or when the brain is fatigued, the brain m...

