Research paper comparing two assessment tools

Anonymous
Asked: Nov 19th, 2016

Question description

This is the information I need for the research paper. I have provided reviews for the two assessments used for this paper.

Comparison of Assessment Tool Constructs

In this assignment, you will be comparing the constructs of two assessment tools. Use the Resources provided and the University Library to complete the following:

  • Select an assessment tool from those listed in the Resources area and one additional assessment tool from the literature using the Capella University Library. The assessment tools should measure the same construct. For example, if you select the Beck Anxiety Inventory from the Resources area, you would want to select another tool from the literature that also measures anxiety.
  • Examine the key test measurement constructs of reliability and validity for each tool, and compare these constructs for each tool. You will want to describe the methods used to acquire reliability and validity for each assessment and also discuss how the constructs relate to each other between the two assessment tools.
  • Describe how results on each assessment are interpreted. For example, how are scores interpreted in comparison to group means and norms (for a standardized or norm-referenced test) or to cutoff scores (for criterion-referenced test)? How are scores on this assessment correlated with other tests that measure the same construct?
  • Incorporate a minimum of six scholarly research studies analyzing the effectiveness of each selected assessment tool in professional settings.
  • Based on the review of literature, evaluate which assessment tool has clearer application of measurement concepts.
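The prompt above asks how scores on one assessment correlate with other tests measuring the same construct; that comparison ultimately rests on a Pearson correlation between paired scores. A minimal sketch in Python (the paired scores below are hypothetical, for illustration only):

```python
def pearson_r(x, y):
    """Pearson correlation between paired scores on two assessments."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical paired scores for five respondents on two tools
# measuring the same construct
tool_a = [12, 18, 25, 31, 40]
tool_b = [15, 20, 27, 35, 44]
r = pearson_r(tool_a, tool_b)  # close to 1, indicating strong convergence
```

A high positive r between two instruments is the usual quantitative evidence of convergent validity discussed in the reviews that follow.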

Please use the Assignment Template listed under Resources to compose your Comparison of Assessment Tool Constructs paper.

Assignment Requirements

  • Written communication: Written communication is free of errors so that the overall message is clear.
  • APA formatting: Resources and citations are formatted according to current APA style.
  • Number of resources: Minimum of six scholarly resources (distinguished submissions will likely exceed that minimum).
  • Length of paper: Six to eight double-spaced, typed pages, excluding title and reference pages.
  • Font and font size: Times New Roman, 12 point.

Review of the Trauma Assessment Inventories by JAMES P. DONNELLY, Assistant Professor, Department of Counseling, School & Educational Psychology, University at Buffalo, Amherst, NY, and KERRY DONNELLY, Clinical Neuropsychologist, VA Western New York Healthcare System, Buffalo, NY: DESCRIPTION. The Trauma Assessment Inventories include three instruments to facilitate assessment of different aspects of trauma experience depending on the clinical need: screening via the Traumatic Life Events Questionnaire (TLEQ), DSM diagnosis with the PTSD Screening and Diagnostic Scale (PSDS), and cognitively oriented treatment using the Trauma-Related Guilt Inventory (TRGI). The screening and diagnostic instruments (TLEQ and PSDS) are packaged and sold together (the "screening kit"); the TRGI treatment inventory (the "treatment kit") can be ordered separately. The author of all three inventories is Edward Kubany, Ph.D., who has focused his clinical activity and research on assessment and treatment of traumatic stress, especially with veterans, for more than a decade. The TLEQ manual makes the case that traumatic events are far more frequent than is generally known and that a brief structured self-report in the clinical setting may be a very efficient way to obtain trauma history. The instrument includes 22 items with specific events such as motor vehicle accidents, war, sudden death of a close relative or friend, domestic violence, and similar events. Each item includes a set of contingent follow-up questions to assess frequency of occurrence and severity of emotional consequences. The 23rd item allows the respondent to identify any other event not previously listed, and the questionnaire concludes with a summary item in which the respondent indicates which of the prior events was the most distressing. The event cited in the last item then becomes the focus of the DSM-oriented PSDS, as well as the TRGI assessment if cognitive-behavioral treatment is under consideration.
The PSDS includes 38 items covering the six criteria for a diagnosis of PTSD in the DSM-IV. The PSDS is the most recent version of a measure previously called the Distressing Event Questionnaire (Kubany, Leisen, Kaplan, & Kelly, 2000). The TRGI includes 32 statements that describe guilt-related thoughts and feelings and produces six subscale scores. These measures were developed on adult populations, but only one of the instruments, the PSDS, specifically addresses age in its administration guidelines, advising use with individuals age 18 or over. The TRGI is written at a fifth grade reading level. Reading levels for the TLEQ and PSDS are sixth and eighth grade, respectively. English language fluency is assumed. All three instruments are available in paper and computer versions. A brief version of the TLEQ is administered by instructing respondents to complete only the unshaded items on the response form, reducing completion time to about 7 minutes. Administration by a trained technician and interpretation by a clinical professional are recommended. DEVELOPMENT. The description of the development of the TLEQ in the manual is brief, but it appears that the identification and refinement of items was deliberate and thorough. The primary effort involved examination of the literature and related instruments along with extensive pilot testing (over 1,000 protocols). The final item set was completed after a review by a panel of seven PTSD experts to obtain feedback on the quality of item wording and adequacy of sampling of traumatic events. The 38 items of the PSDS were directly based on PTSD criteria in the DSM-IV. Once again, items were reviewed by PTSD experts.
The TRGI was built more from the "ground up" than the two diagnostic instruments, presumably because the construct is more recent and so that interval-level scales reflecting aspects of trauma-related guilt could be employed, in contrast to the categorical judgments (primary traumatic event, diagnosis) facilitated by the screening kit. The TRGI item development process included integration of relevant thoughts and feelings identified in prior studies, related instruments, clinician expertise, and semistructured interviews with trauma survivors. Ultimately, an initial pool of 120 items was reduced to a test set of 40 and a final set of 32 covering six constructs (the subscales are Global Guilt, Distress, and four on specific guilt cognitions: Hindsight Bias/Responsibility, Insufficient Justification, Wrongdoing, and a General subscale). The establishment of the six scales was based on factor analysis of three samples (two college groups with Ns of 200 and 125 and one battered women support group sample of 100). Item analysis with these samples was aimed at retaining items that met criteria including variability in response (no more than 50% of respondents selecting any particular response anchor), correlations below .9 with all other items, a factor loading of at least .5 on the primary factor with a difference of .3 or greater between the primary and all other factors, and review of the expected clinical utility of each item. The final 32-item set was said to produce a "robust and very stable" (TRGI manual, p. 26) four-factor solution that included 22 of the items. Unfortunately, the otherwise complete report in the manual does not include tables of the factor analysis results, and the text is limited in describing these analyses. TECHNICAL. The TLEQ. Because the TLEQ does not produce scale scores, the psychometric analysis focused on temporal consistency of reports of trauma in several samples, as well as content validity.
Temporal consistency was examined in four samples over varying intervals from 5 days to 2 months. The samples included 51 Vietnam combat veterans, 49 residential substance abuse patients, 62 college students, and 42 members of a support group for battered women. Each item (traumatic event + emotional consequences) was assessed in terms of percent agreement and Cohen's kappa. The lowest temporal consistency levels were observed in the "other accident" category, with percent agreement ranging from 63 to 88 and kappa coefficients from .27 to .59. The interpretation of these numbers suggests that recall of particulars from a large and varied set is more complex than from a singular traumatic event, such as a life-threatening illness. The simple percentage agreement numbers may be more informative than coefficient kappa because: (a) kappa assumes that raters are independent (here they are the same people) and (b) kappa coefficients can be quite low even when agreement is actually high, depending on such parameters as cell size and base rate. The primary validity issue addressed in the manual is content validity. The case in favor of content validity is primarily made through the argument that the set of events in the TLEQ is more comprehensive than other similar measures, and that sample estimates of exposure to traumatic events tend to be higher with the TLEQ than other methods, including interviews. In addition, the instrument significantly discriminated between women with and without PTSD in a support group for battered women (N = 61). The PSDS. Reliability studies on the PSDS included internal consistency of symptom reports, test-retest correlations, and temporal stability in terms of diagnosis based on PSDS score. All of these reliability analyses suggest good to excellent consistency of this instrument. 
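The reviewers' caution that kappa can be quite low even when raw agreement is high is easy to demonstrate from a 2x2 test-retest table. A sketch in Python (the cell counts are hypothetical, chosen to create the skewed base rate the reviewers describe):

```python
def agreement_and_kappa(a, b, c, d):
    """Percent agreement and Cohen's kappa for a 2x2 test-retest table.
    a = yes/yes, b = yes/no, c = no/yes, d = no/no counts."""
    n = a + b + c + d
    p_obs = (a + d) / n                              # observed agreement
    p_yes1, p_yes2 = (a + b) / n, (a + c) / n        # marginal "yes" rates
    p_chance = p_yes1 * p_yes2 + (1 - p_yes1) * (1 - p_yes2)
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return p_obs, kappa

# Rare event: 92% of respondents answer "no" at both administrations
p_obs, kappa = agreement_and_kappa(a=2, b=4, c=4, d=90)
# p_obs = 0.92 (high agreement) yet kappa is only about .29
```

Because chance agreement is already near .89 when the base rate is this skewed, kappa has little room left, which is why the simple percentage figures can be the more informative statistic here.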
Coefficient alpha estimates were generally above .80 for the clusters of PTSD symptoms in a sample of Vietnam veterans (N = 120) and in four samples of women who had experienced domestic violence and other forms of abuse (Ns = 82, 75, 74, and 24). The lowest alpha estimates were .69, for clusters C and D in the veterans group. Test-retest correlations were examined in samples of battered women support group participants and veterans with varying time intervals (no indication is given as to why the intervals vary) averaging about 10 and 17 days, respectively. Another study examined the consistency of response over a 1-week period when paper and computer versions were utilized. In this study, 76 individuals from a variety of outpatient settings were given the alternate test formats 1 week apart. The correlation of total symptoms between the two versions was .81, and there was also good consistency in classification by PTSD diagnosis. Validity of the PSDS was examined in homogeneous and heterogeneous populations with similar validity questions addressed in both kinds of samples. Diagnostic accuracy was studied by comparing PSDS responses to those obtained from a structured clinical interview (the Clinician-Administered PTSD Scale). Analyses included sensitivity, specificity, and positive and negative predictive power. In addition to examining the ability of two scoring methods to discriminate between PTSD presence versus absence, the author broke down the accuracy data by sample and gender. Interestingly, for all samples except battered women, the optimal cutoff score for the symptom total was 26; for the women who had suffered sexual trauma, the optimal cutoff was 18. The extensive tables provided on these analyses are informative and provide good evidence of discriminant validity.
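Coefficient alpha, the internal consistency statistic cited throughout these reviews, is computed from item-level and total-score variances. A minimal sketch (the item responses below are hypothetical):

```python
def cronbach_alpha(items):
    """items: one list per item, each holding one score per respondent."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    total_scores = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(total_scores))

# Two hypothetical, perfectly correlated items -> alpha of about .89
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Alpha rises both with the number of items and with how strongly the items covary, which is why short scales such as the two-scale validity indices discussed later tend to show lower coefficients.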
In addition, convergent validity was shown to be good via correlations with previously developed clinical scales in both the homogeneous and heterogeneous samples, and again the reporting in the manual is complete. The TRGI. Reliability of the TRGI is reported in terms of internal consistency and temporal stability. Internal consistency (coefficient alpha) is given for five samples including two groups of combat veterans and three groups of individuals who had experienced physical abuse trauma (Ns range from 68 to 269). Of the 30 coefficients (five samples by six subscales), only two were below .70, and 21 were at least .80. Smaller samples of college students (N = 32) and veterans (N = 69) were given the TRGI twice, with approximately 1 week between administrations. Test-retest correlations ranged from .73 to .86 across the six subscales and two samples. Validity studies have included discriminant validity demonstrating significant discrimination of PTSD diagnosis (in terms of mean differences between groups and Receiver Operating Characteristic [ROC] curve studies) as well as sensitivity to change in a randomized treatment study of cognitive therapy with PTSD patients. COMMENTARY. In reviewing these instruments as a set, we come away impressed with the likely clinical utility of these measures. They appear to have adequate psychometric support, and the test manuals are generally well written and complete (some exceptions were noted above). The TRGI manual includes additional treatment material that may be quite useful in practice. There is the potential concern of investigator bias when so much of the supporting data have come from a single source, but the procedures followed in development of these measures meet generally accepted standards, and in some instances exceed them. One minor criticism is that we have seen other WPS manuals use the same overly general and outdated comments regarding effect size. SUMMARY.
As the test manual warns, no self-report instrument should be used by itself to make a diagnosis of PTSD. Nonetheless, the Trauma Assessment Inventories constitute a psychometrically sound set of tools for initial identification of traumatic experiences and for evaluation of posttraumatic symptoms. In addition, the TRGI appears to be a well-developed tool for assessment and treatment of guilt-related thoughts and feelings of trauma survivors in the context of cognitively oriented therapy. REVIEWERS' REFERENCE Kubany, E. S., Leisen, M. B., Kaplan, A. S., & Kelly, M. P. (2000). Validation of a brief measure of posttraumatic stress disorder: The Distressing Event Questionnaire (DEQ). Psychological Assessment, 12, 197-209. Review of the Trauma Assessment Inventories by CARL J. SHEPERIS, Assistant Professor of Counselor Education, and APRIL K. HEISELT, Assistant Professor of Counselor Education, Mississippi State University, Starkville, MS: DESCRIPTION. The Trauma Assessment Inventories set contains three instruments: the screening kit, composed of the Traumatic Life Events Questionnaire (TLEQ) and the PTSD Screening and Diagnostic Scale (PSDS), and the treatment kit, containing the Trauma-Related Guilt Inventory (TRGI). The three instruments are designed to work in conjunction with one another to assist clinicians in making determinations about Posttraumatic Stress Disorder (PTSD) and the guilt experienced by trauma survivors. The complete set or individual components are available for purchase. Additional manuals, testing forms, answer sheets, and computerized components for the Trauma Assessment Inventories are also available from the publisher. The TLEQ is a 24-item self-report questionnaire used as a brief screener for PTSD symptomology. Respondents answer questions that address their exposure to 21 traumatic life events, indicating if and how often (i.e., once, twice, three times, four times, five times, or more than five times) particular life events occurred.
Respondents are also asked about the presence of feelings of intense fear, helplessness, or horror in relation to those events. The TLEQ concludes with an open-ended question that asks respondents to indicate the one event that caused them the most distress. The TLEQ takes approximately 10 to 15 minutes to complete and can be administered to individuals with a sixth grade reading level. The PSDS, a 38-item self-report questionnaire, is employed to assess the severity of the experienced event reported in the open-ended section of the TLEQ. Clinicians also may use the PSDS in reference to a given trauma. The PSDS contains 17 key components that match the six criteria for PTSD as defined by the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 1994). Respondents use a 5-point Likert scale (i.e., "absent or did not occur" to "present to an extreme or severe degree") to indicate the severity of symptoms they have experienced over a 30-day period. Respondents also indicate when their symptoms began and the length of time they have continued. The PSDS is designed for respondents who are 18 years of age or older with an eighth-grade reading ability. Both the TLEQ and the PSDS have abbreviated and long versions. The TRGI is designed for clinicians who treat PTSD, and can be used to identify guilt-related feelings associated with traumatic events. The TRGI is a 32-item self-report questionnaire that employs a 5-point Likert scale (i.e., "extremely true" to "not true at all") to make determinations about respondent guilt feelings. TRGI items are scored on three separate dimensions: Global Guilt, Distress, and Guilt Cognitions. The TLEQ and PSDS can be administered either via computer or in pencil-and-paper format. Because of the nature of the questions regarding traumatic experiences, the author suggests that trained clinicians debrief with respondents following the completion of the TLEQ.
Clinicians should clarify the specific event they want respondents to refer to when completing the PSDS. Although technicians can administer the self-report inventories, interpretation of the results should be carried out by professionals with psychometric training. The TLEQ and PSDS are self-report questionnaires that do not result in standardized scores. Users of the TLEQ can generate a rough index of the magnitude and severity of the trauma reported by respondents. These indices include the number of TLEQ events that have occurred in the life of a respondent; the number of TLEQ events that both occurred and evoked intense fear, helplessness, or horror; and the total number of events reported by the respondents and tallied by clinicians. Much like the TLEQ, there are no standardized scores for the PSDS. Clinicians also may consider symptom ratings in order to obtain informal measures of respondent distress. The TRGI employs an Auto Score form whereby users transfer responses to the appropriate scale column. Scores are summed to determine raw scores in each of the six response value areas, although no separate general guilt cognition scores are interpreted on the TRGI. Raw scores are then converted to normalized T-scores with a mean of 50 and a standard deviation of 10. Average scores range between 45T and 55T. Users of the TRGI should review the content of the information provided by respondents contributing to high TRGI scores. The Trauma Assessment Inventories also may be scored via computer. DEVELOPMENT. Instrument development for the TLEQ was approached from a theoretical stance. Items were created following a review of relevant literature, examination of instruments that assess exposure to traumatic events, collection of open-ended responses from more than 1,000 completed preliminary TLEQ versions, and evaluation of pilot item content.
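The raw-to-T-score conversion described for the TRGI is, at its core, a standardization against normative statistics (normalized T-scores in a manual may additionally correct for skew via percentiles; the linear version and the normative values below are illustrative assumptions, not the published TRGI norms):

```python
def t_score(raw, norm_mean, norm_sd):
    """Linear T-score: mean 50, standard deviation 10 in the normative sample."""
    z = (raw - norm_mean) / norm_sd     # standard (z) score relative to norms
    return 50 + 10 * z

# Hypothetical norms: subscale mean 20, SD 5
t_score(30, 20, 5)    # two SDs above the norm mean -> T = 70
t_score(22.5, 20, 5)  # inside the "average" 45T-55T band -> T = 55
```

On this scale the 45T to 55T "average" band cited above corresponds to raw scores within half a standard deviation of the normative mean.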
Content validity for the TLEQ and PSDS was established through the use of expert-review panels consisting of published PTSD experts and clinical psychologists specializing in PTSD. The TRGI was developed using four sources: a review and analysis of the guilt literature, examination of existing guilt scales, structured interviews with trauma survivors, and clinical work with trauma survivors. An initial pool of 120 items was created to reflect the six dimensions of guilt. From this pool, a 40-item preliminary TRGI was prepared and distributed to two samples of college students who reported experienced traumas and 100 battered women attending support groups. TECHNICAL. Four studies were conducted in order to determine the test-retest reliability of the TLEQ. Populations in the studies included: 51 Vietnam combat veterans (5- to 45-day test-retest intervals), 49 men and women in a residential substance abuse program (2-month interval), 62 college students (1-week interval), and 42 women attending support groups for battered women (2-week interval). Across the samples most items indicated adequate temporal stability, especially items asking about childhood physical abuse (kappas = .63 to .91), witnessing family violence (kappas = .60 to .79), childhood sexual abuse by someone more than 5 years older (kappas = .70 to .90), and stalking (kappas = .59 to .84). The poorest temporal consistency was related to accidents other than motor vehicle accidents. High test-retest reliability also was found for reports of intense fear, horror, and helplessness among the women attending support groups for battered women. It should be noted that the samples used in these studies were relatively small. Content validity procedures for the TLEQ were not clearly delineated in the manual. The author of the manual claimed that content validity was established because a large proportion of respondents (93% in one study and over 99% in another) reported having experienced traumatic events.
This reasoning is circular and does not clearly provide evidence of content validity. Internal consistency for the PSDS was tested on 120 male Vietnam combat veterans and four groups of women including: 82 women sexually abused by a household member before the age of 18 who had received services in the previous year from an agency or provider that serves incest survivors; 75 women sexually assaulted after the age of 12 who received services in the previous year from an agency that serves rape victims; 74 women abused by an intimate partner who received services in the previous year from an agency that serves battered women; and 24 women with histories of prostitution, substance abuse, and sexual abuse. Alpha coefficients for each criterion ranged from .80 to .98. Test-retest reliability was assessed with 52 Vietnam combat veterans. The test-retest interval was from 5 to 45 days. The test-retest correlation coefficient for total PSDS symptoms was .95. Test-retest reliability also was assessed with 54 women receiving support group counseling services from a nonprofit community agency that serves battered women. The test-retest interval ranged from 7 to 21 days, and the test-retest coefficient for total PSDS symptom scores was .83. Two studies were conducted to determine the validity of the PSDS. One study, consisting of a homogeneous patient group of 120 Vietnam combat veterans, was conducted to compare structured interview assessments with protocol results. The PSDS correctly classified the PTSD status of 86% of the sample. An additional study found that when using a score of 18 or higher for making a PTSD diagnosis, the PSDS correctly classified the PTSD status of between 83% and 93% of the four samples of women and 90% of the combined sample of 255 women. To determine test-retest reliability, the TLEQ and TRGI were distributed to 60 college students enrolled in an undergraduate psychology class.
One week later the test was given to 32 students (23 students did not report trauma exposure and so did not complete the TRGI, and 5 others were dismissed due to missing data). Test-retest correlation coefficients for the Global Guilt, Distress, and Guilt Cognitions scales were .73 or above. Test-retest reliability also was assessed with a sample of 69 military veterans. The interval for this group was 8 days, and the coefficients for the Global Guilt, Distress, and Guilt Cognitions scales were .84 or above, indicating strong test-retest reliability. In samples of 74 Vietnam combat veterans and 68 women in battered women's support groups, the Guilt Cognitions scale significantly correlated with measures of depression and PTSD. COMMENTARY. The TLEQ produces no formal scores. Instead it relies on respondents' reports of their experiences. Thus, users are cautioned that the PSDS should not be the sole instrument used to determine PTSD. Future research needs to be conducted in order to determine the extent of content validity. Of particular importance with regard to the TRGI is that item development was based on a limited number of clinical interviews (Beckham, Feldman, & Kirby, 1998, p. 779). There is a paucity of research on the TLEQ, PSDS, and TRGI. Thus, further research is needed to determine the reliability and functionality of the instruments. SUMMARY. The Trauma Assessment Inventories (TLEQ, PSDS, and TRGI) comprise a low-cost set of instruments that is relatively easy to score and can provide seemingly accurate estimates of PTSD symptomology. It should be noted that none of these instruments alone should be used to diagnose PTSD, as additional factors can affect diagnosis. Because the sample sizes employed during instrument development were small, more peer-reviewed research is needed in order to resolve questions of validity for these instruments. Some aspects of the test manuals are confusing with regard to scoring procedures.
However, illustrations in the test manuals prove useful. REVIEWERS' REFERENCES American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author. Beckham, J., Feldman, M., & Kirby, A. (1998). Atrocities exposure in Vietnam combat veterans with chronic posttraumatic stress disorder: Relationship to combat exposure, symptom severity, guilt, and interpersonal violence. Journal of Traumatic Stress, 11, 777-785.
Review of the Detailed Assessment of Posttraumatic Stress by ROGER A. BOOTHROYD, Associate Professor, Department of Mental Health Law and Policy, Louis de la Parte Florida Mental Health Institute, University of South Florida, Tampa, FL: DESCRIPTION. The Detailed Assessment of Posttraumatic Stress (DAPS) is a 104-item self-report measure assessing exposure to trauma and posttraumatic response. The measure is intended for use with individuals who have undergone a significant psychological stressor. It can be used to assist clinicians in determining the presence or absence of a probable Posttraumatic Stress Disorder (PTSD) or Acute Stress Disorder (ASD) diagnosis. The DAPS can be group or individually administered and scored by individuals with no specialized training. It is appropriate for use with persons 18 years of age and older and requires that respondents have at least a sixth-grade reading level. The measure takes between 20 and 30 minutes to complete and can be scored and profiled in approximately 20 minutes. After reporting their exposure to various potentially traumatic life events, respondents report the severity of the most traumatic experience and the frequency of various symptom clusters using a 5-point scale with varying anchors ranging from a low of 1 to a high of 5. The DAPS contains 13 scales. It has two validity scales designed to identify respondents who deny or underreport their symptoms (i.e., Positive Bias) as well as those who overreport their symptoms and endorse unusual symptoms (i.e., Negative Bias). Four scales evaluate respondents' lifetime exposure to trauma. The Relative Trauma Exposure scale assesses the extent to which respondents have been exposed to multiple sources of trauma. A single item on the Onset of Exposure scale determines the recency of the traumatic event. The Peritraumatic Distress scale evaluates the severity of the distress respondents experienced at the time the event occurred.
The Peritraumatic Dissociation scale determines whether a respondent dissociated during the traumatic event. Three of the scales relate to common PTSD symptom clusters (i.e., Intrusive Reexperiencing, Avoidance/Numbing, and Autonomic Hyperarousal). Additionally the DAPS contains a summary scale (i.e., Posttraumatic Stress-Total) and a scale to assess impairment of psychosocial functioning (i.e., Posttraumatic Impairment). Finally, the DAPS includes three scales assessing associated features of PTSD (i.e., Trauma-Specific Dissociation, Suicidality, and Substance Abuse). DEVELOPMENT. Originally the author developed 190 items to examine response validity and to assess the diagnostic criteria set forth for Posttraumatic Stress Disorder (309.81) in the DSM-IV-TR. The items were subsequently reviewed by clinicians experienced in treating PTSD and 59 items were eliminated because of redundancy or inadequacy. The remaining 131 items were administered to 105 individuals to obtain preliminary psychometric information. Based on these results, an additional 27 items were eliminated, resulting in the current 104-item measure. TECHNICAL. Scoring & Standardization. In the absence of missing responses, raw scores are simply the sum of the item responses within a symptom domain. The manual includes scoring instructions for handling missing responses, including the number of items that can be missing and still produce a valid score, which differs by subscale. Raw scores are converted to standardized T scores by either plotting them on the profile sheets provided or by using the normative conversion tables in the test manual. Normative information was obtained from responses to a stratified (based on geographical location) random sample of adults obtained from a national sampling service. The DAPS and other related materials for assessing its psychometric properties were mailed to an unspecified number of individuals. 
In addition, 70 college students were administered the same protocol. The author does not provide any information on the response rate for this mailing or the extent to which respondents were representative of the initial sample of individuals to which materials were mailed. The normative sample included 446 participants who completed and returned the DAPS and who reported at least one DSM-IV-TR level traumatic event. Descriptive data on the age, gender, and racial/ethnic composition of respondents from the normative sample are provided. The effects of these respondent characteristics were examined in relation to scale scores. No age or racial/ethnic differences were found. Gender differences were found on a number of scales. Given this, gender-specific profiles and normative tables were developed and are provided in the user manual. The manual also provides decision rules to assist users in determining the likelihood that a respondent has a PTSD or ASD diagnosis. The decision rules used to establish a probable diagnosis of PTSD employ symptom scale cutoff scores, and those used to identify an ASD diagnosis employ a criterion-based method. Four examples based on the responses of trauma-exposed individuals are provided to assist DAPS users with score interpretation and determining diagnosis. Reliability. In addition to the normative sample, reliability information is provided based on a combined clinical/community sample and a university sample. Internal consistency reliability in the form of Cronbach's alpha is the only type of reliability information provided. The majority of the 13 multiple-item scales have Cronbach coefficients above .8 across the three samples. The internal consistency estimates on the Positive Bias scale range from .61 to .80 whereas the Negative Bias and Relative Trauma Exposure scales have coefficients in the .49 to .67 range.
The author notes that the lower internal consistency reliabilities associated with these two scales are of less concern given that the Negative Bias scale reflects endorsement of unusual symptoms and is not intended to be internally consistent, and the Relative Trauma Exposure scale is a count of respondents' exposure to different types of trauma and is not representative of a specific symptom domain. Validity. Three types of evidence are provided in support of the validity of scores from the DAPS: the relationship of DAPS scores with conceptually important variables, its convergence with similar measures, and its discrimination from less-related measures. In terms of theoretically meaningful variables, DAPS scale scores were correlated with the total number of traumas respondents experienced, the type of trauma experienced (i.e., interpersonal [e.g., rape, physical assault] versus noninterpersonal [e.g., disaster, motor vehicle accident]), and the amount of distress experienced at the time of the trauma, to determine if the scales were associated with these variables in the manner suggested by the existing literature. The number of lifetime traumas experienced (i.e., RTE scale) was significantly correlated with most of the symptom scales. As expected, greater exposure to trauma was associated with increased reporting of distress, reexperiencing, avoidance, hyperarousal, posttraumatic impairment, dissociation, and suicidality. Similarly, respondents' ratings of the level of distress experienced at the time of the trauma were found to be significant predictors of subsequent posttraumatic stress levels. As the literature would suggest, higher levels of distress were associated with higher levels of reexperiencing, avoidance, hyperarousal, and posttraumatic impairment.
Finally, as anticipated, respondents who experienced interpersonal trauma reported significantly higher levels of distress, reexperiencing, avoidance, and suicidality than did those who had experienced noninterpersonal trauma. Convergent and discriminant validity were assessed by correlating the DAPS scales with various other measures. In general, the Positive Bias and Negative Bias scales of the DAPS were significantly correlated in the anticipated direction (in the ±.4 to .5 range) with the validity scales from the Trauma Symptom Inventory (Briere, 1995), the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), and the Personality Assessment Inventory (Morey, 1991), indicating good convergent and discriminant validity. The DAPS symptom scale scores were likewise correlated with symptom scales from these other measures as an assessment of convergent validity: over 90% of these correlations (46 of 51) exceeded .60, and 33% exceeded .70. When the DAPS subscale scores were correlated with scales from less-related measures (as an assessment of discriminant validity), the magnitude of the correlations was less than .5 in 75% of the cases. The same strategy was used to assess the convergent and discriminant validity of the DAPS associated features scales and produced similar support for those scales. The DAPS has also shown a high level of diagnostic agreement (kappa = .73) with the Clinician-Administered PTSD Scale (CAPS; Blake et al., 1990), a measure that requires nearly three times as long to administer, as well as good sensitivity (.88) and specificity (.86).

COMMENTARY. The DAPS was developed to conform closely to the PTSD diagnostic criteria and includes specific scales for each of the three symptom clusters as well as individual scales for three associated features.
Its developer is clearly an expert in the field of PTSD assessment. The manual is well written and contains information on the measure's development, administration, scoring, interpretation, and psychometric properties, as well as normative information on nearly 450 trauma-exposed individuals. However, additional information on the norming sample, such as the response rate to the mailing and the representativeness of respondents relative to the original sample, would have been helpful. The data provided on the DAPS to date suggest that it is a psychometrically sound measure of PTSD.

SUMMARY. The DAPS is a promising measure for assessing PTSD and distinguishing it from ASD. Although information on the stability of the measure is lacking, its other psychometric properties are quite acceptable.

REVIEWER'S REFERENCES

Blake, D. D., Weathers, F. W., Nagy, L. M., Kaloupek, D. G., Klauminzer, G., Charney, D. S., & Keane, T. M. (1990). A clinician rating scale for assessing current and lifetime PTSD: The CAPS-1. The Behavior Therapist, 13, 187-188.

Briere, J. (1995). Trauma Symptom Inventory. Odessa, FL: Psychological Assessment Resources.

Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Minnesota Multiphasic Personality Inventory (MMPI-2): Manual for administration and scoring. Minneapolis: University of Minnesota Press.

Morey, L. C. (1991). Personality Assessment Inventory: Professional manual. Odessa, FL: Psychological Assessment Resources.

Review of the Detailed Assessment of Posttraumatic Stress by LARISSA SMITH, Presley Center for Crime and Justice Studies, University of California, Riverside, CA:

DESCRIPTION. The Detailed Assessment of Posttraumatic Stress (DAPS) is a 104-item, self-report, paper-and-pencil inventory measuring extent of trauma exposure and posttraumatic response.
Subscales include three trauma-specific scales (Relative Trauma Exposure, Peritraumatic Distress, Peritraumatic Dissociation), five posttraumatic scales (Reexperiencing, Avoidance, Hyperarousal, Posttraumatic Stress-Total, Posttraumatic Impairment), three associated features scales (Trauma-Specific Dissociation, Substance Abuse, and Suicidality), and two validity scales (Positive Bias and Negative Bias). Questions are either dichotomous or on a 5-point Likert scale and address experiences and behaviors both during the event and during the month before protocol administration. Responses transfer to a scoring sheet designed to facilitate hand scoring, including a worksheet area for evaluating the extent to which the examinee meets DSM-IV criteria for Posttraumatic Stress Disorder (PTSD). Profile charts are provided for both men and women, allowing visual comparison to the normative sample using standardized T scores. The test can be administered by nonclinical staff, though interpretation of scores and profiles requires graduate training in clinical/counseling psychology and in test interpretation. Specific instructions for examinees are included in the manual, which also provides guidelines for handling missing data, calculating raw scores, and converting raw scores to T scores, along with sample profile interpretation case studies. The test takes approximately 20-30 minutes to complete and requires the equivalent of a sixth-grade reading level.

DEVELOPMENT. An initial pool of 190 items was reduced to 104 through consultation with expert clinicians and through item analysis based on the first 105 participants in the normative sample. Scales appear to have been developed on the basis of DSM-IV criteria and prior research in the field of posttraumatic stress. The manual provides a detailed description of each scale, including item content, characteristics of high scorers, and empirical and/or theoretical basis.

TECHNICAL. Standardization.
The authors employed a national sampling service to construct a stratified random sample of adults from Department of Motor Vehicles registries and telephone books. Participants were mailed the DAPS as part of a battery of instruments assessing trauma-related experience, including a survey specifically designed to evaluate the presence or absence of childhood and adult trauma. Of the participants contacted, 620 completed the protocol (there is no information about what response rate this represents), and 446 of those reported having experienced at least one DSM-IV-TR-level incident in the past. In addition, a sample of 70 university students was administered the protocol in order to extend the age range of the sample. In the trauma-exposed sample, the mean age was 45.6 years (range 18-91). Just over half the respondents were men, and the sample was primarily (83.4%) Caucasian. Analyses of this sample showed no age or ethnicity differences in scale scores. Some scales show gender differences; the manual presents norms by gender and separate profile sheets to take those differences into account.

Reliability. When the norming process was complete, further data were collected in a clinical/community sample and an undergraduate sample (N = 257) to assess reliability and validity. Participants in the undergraduate sample also reported at least one prior traumatic event. The clinical sample (N = 191) was recruited by clinicians in various parts of the United States; the community sample (N = 58) was recruited through flyers and newspaper advertisements and included participants with at least one prior traumatic event. The community sample was primarily female (80%) and Caucasian (77%), with a mean age of 35 years; the university sample was similarly primarily female (74%) and Caucasian (84%), with a mean age of 19.6 years.
Alpha reliabilities for the scales range from .52 (Negative Bias) to .96 (Posttraumatic Stress-Total) overall, though it should be noted that the Negative Bias scale is intended to detect fake-bad responding based on endorsement of bizarre or unusual symptoms and is not meant to measure a single construct. Reliabilities were marginally lower in the university sample than in the clinical/community sample. No information on test-retest reliability is provided. Most interscale correlations are above .30.

Validity. The manual provides an impressive amount of validity information. Depending on the subsample, the DAPS was co-administered with some subset of at least a dozen scales, including the Minnesota Multiphasic Personality Inventory-2 (MMPI-2). The DAPS Positive Bias (PB) and Negative Bias (NB) scales correlate substantially, and in the predicted direction, with validity scales from other trauma- and personality-related assessment instruments. Correlations between the symptom scales and the symptom scales of other inventories are presented with their associated Ns and are mostly in the .60-.80 range. Correlations with scales measuring antisocial personality, mania, and somatic complaints, constructs theoretically and empirically unrelated to PTSD, were substantially lower, in the .10-.30 range. Diagnostic utility for PTSD assessment was evaluated against the Clinician-Administered PTSD Scale (CAPS). On the basis of the CAPS, a subsample of participants from the clinical sample was categorized as PTSD positive (N = 25) or PTSD negative (N = 44). The DAPS miscategorized only nine of these participants (six false positives and three false negatives), producing a kappa of .73, a sensitivity of .88, and a specificity of .86.
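The diagnostic utility statistics above follow directly from the reported 2x2 counts. A short sketch reconstructing them (the arithmetic uses only the review's figures; the ~.72 kappa obtained here differs from the reported .73 only through rounding in the published counts):

```python
# 2x2 agreement between DAPS and CAPS diagnoses, from the counts in the review:
# 25 CAPS-positive and 44 CAPS-negative cases; 6 false positives, 3 false negatives.
tp, fn = 25 - 3, 3        # CAPS-positive cases the DAPS caught / missed
tn, fp = 44 - 6, 6        # CAPS-negative cases the DAPS cleared / flagged
n = tp + fn + tn + fp     # 69 participants

sensitivity = tp / (tp + fn)   # 22/25 = .88
specificity = tn / (tn + fp)   # 38/44 ~ .86

# Cohen's kappa: agreement corrected for chance.
po = (tp + tn) / n                                           # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
kappa = (po - pe) / (1 - pe)   # ~ .72 from these rounded counts,
                               # consistent with the .73 in the manual
```

By Landis and Koch's conventional benchmarks, a kappa in the low .70s represents substantial agreement, which is why the reviewers treat it as strong support for the shorter DAPS.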
Diagnostic utility for Acute Stress Disorder (ASD) was not assessed, owing to the recency of the diagnostic category and the paucity of structured clinical interview assessments, and the manual recommends caution in diagnosing ASD solely on the basis of the DAPS decision rules.

COMMENTARY. The DAPS manual presents a very thorough survey of currently used PTSD assessment instruments and makes a well-supported case for the DAPS's contribution over and above existing instruments. There is also an impressive level of detail in the scale descriptions, the administration and scoring instructions, and the profile interpretation examples. Information on the normative sample and the validation process is likewise presented in considerable detail, and a substantial amount of care was taken in acquiring and composing the traumatized and comparison subsamples. The scales overlap more (as indicated by interscale correlations) than might be considered optimal, but the scale structure is well grounded in prior literature. Items are for the most part clearly worded and readily understandable, though there is the occasional digression into the vernacular (e.g., Item 24, "You 'spaced out.'"). The answer sheet is a bit awkward in its flow, but the ease of scoring and diagnosis it provides outweighs that awkwardness. Profile sheets are easily understood and completed. That the DAPS allows for both scale scoring and profile interpretation is a benefit, especially given the detailed examples and interpretive instructions.

SUMMARY. The DAPS can be administered easily and relatively quickly, and it provides a large amount of information. Issues of missing data, response bias, gender effects, and administration conditions are all adequately addressed in the manual, and evidence for internal consistency reliability and for discriminant, convergent, and predictive validity is fair to good.
Scales are well grounded theoretically and empirically, and the author provides a careful and detailed placement of the DAPS within the body of existing instruments. The bulk of the evidence weighs in favor of the test's utility for its stated purpose.
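Both reviews note that DAPS profiles compare an examinee to the normative sample via standardized T scores. The conversion is the standard linear T-score formula; the normative mean and SD below are placeholders for illustration, not the actual DAPS norms:

```python
# Standard linear T-score conversion: standardize a raw score against the
# normative sample, then rescale so T has mean 50 and SD 10.
# norm_mean and norm_sd here are placeholder values, NOT the DAPS norms.
def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    z = (raw - norm_mean) / norm_sd   # distance from the norm in SD units
    return 50 + 10 * z

# A raw score one SD above the (hypothetical) normative mean converts to T = 60.
t = t_score(30, norm_mean=20, norm_sd=10)
```

This is why the gender-specific profile sheets matter: scales with gender differences need separate means and SDs so that the same raw score maps to the correct T for men and women.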
