Multiple Raters

User Generated


Writing

PSYC 6551

Description

Question: There are many instances in which multiple raters use a single psychometric measure to evaluate one individual, such as in job performance appraisals. You may have heard of 360-degree reviews, which allow multiple people who work with an employee (typically peers, subordinates, and supervisors) to provide feedback on performance. The hope is that, with multiple sources of input, a fairer and more complete picture of an employee's performance can be obtained. Several considerations must be addressed, however, when implementing a multiple-rater assessment. A strategy must be devised for combining the evaluations: the scores may be averaged, evaluated using a rating scale, or one rater (or one pair of raters) may be selected as the "best" and those scores used. It is also necessary to examine the reliability of the assessment. The intraclass correlation and kappa are two statistics often used to measure inter-rater reliability. These tools tell you the degree to which the raters agree in their scores, and they are useful for improving assessments and rater training.

To prepare for this Discussion, consider how you might combine multiple raters' evaluations of an individual on a measure of job performance. Also consider the psychometric implications of multiple raters and how you might improve the reliability of this type of assessment.

With these thoughts in mind: Post an explanation of how you might combine multiple raters' evaluations of an individual on a measure of job performance. Provide a specific example of this use. Then explain the psychometric implications of using multiple raters. Finally, explain steps you could take to improve the reliability of a multi-rater assessment. Support your response using the Learning Resources and the current literature.

Resources

Cattell, R. B., & Saunders, D. R. (1950). Inter-relation and matching of personality factors from behavior rating, questionnaire, and objective test data. Journal of Social Psychology, 31(2), 243-260. Retrieved from the Walden Library databases.

Fisher, S. T., Weiss, D. J., & Dawis, R. V. (1968). A comparison of Likert and pair comparisons techniques in multivariate attitude scaling. Educational and Psychological Measurement, 28(1), 81-94.

Lissitz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte Carlo approach. Journal of Applied Psychology, 60(1), 10-13. Retrieved from the Walden Library databases.

MacCallum, R. C., & Tucker, L. R. (1991). Representing sources of error in the common factor model: Implications for theory and practice. Psychological Bulletin, 109(3), 502-511. Retrieved from the Walden Library databases.

McCrae, R. R. (1994). The counterpoint of personality assessment: Self-reports and observer ratings. Assessment, 1(2), 159-172.

Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift's electric factor analysis machine. Understanding Statistics, 2(1), 13-32. Retrieved from the Walden Library databases.

Rothstein, H. R. (1990). Interrater reliability of job performance ratings: Growth to asymptote level with increasing opportunity to observe. Journal of Applied Psychology, 75(3), 322-327. Retrieved from the Walden Library databases.

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420-428. Retrieved from the Walden Library databases.
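Two of the ideas in the prompt, averaging scores across raters and using kappa to gauge agreement, can be illustrated with a minimal Python sketch. The data here are invented for illustration (they are not from the Learning Resources), and the kappa shown is Cohen's kappa, the two-rater form of the statistic the prompt mentions.

```python
from collections import Counter

def combine_by_mean(ratings):
    """Average each employee's scores across raters (rows = raters)."""
    n_raters = len(ratings)
    return [sum(col) / n_raters for col in zip(*ratings)]

def cohens_kappa(a, b):
    """Cohen's kappa for two raters assigning categorical ratings."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each rater's marginal category frequencies
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 360-style data: three raters score four employees (1-5 scale)
scores = [
    [4, 3, 5, 2],   # supervisor
    [4, 4, 5, 3],   # peer
    [3, 3, 4, 2],   # subordinate
]
print(combine_by_mean(scores))   # one composite score per employee

# Agreement between two raters who sorted five employees into categories
print(round(cohens_kappa([1, 2, 3, 2, 1], [1, 2, 3, 3, 1]), 3))
```

Averaging is the simplest combination rule from the prompt; a weighted mean (e.g., weighting the supervisor more heavily) would be a small variation on `combine_by_mean`.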


Explanation & Answer

Hello, please find the attached paper. In case of any issues, feel free to reach out. Thanks!

Surname1
Name
Course
Professor
Date
Job Evaluation (Multi-raters)
In today’s world, there is a growing need for evaluation. Nearly all organizations
need to evaluate their employees and their performance, which calls for clear and
accurate evaluation methods. In the past, organizations relied on a single method of
evaluating individuals. These single-source methods have not been effective, because
the resulting information was biased and inaccurate. Using multiple sources and raters
helps ensure that the information is accurate and fair, because the outcome does not
depend on a single input or source. In combining the single systems, there is...
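One standard way to quantify why pooling raters improves reliability, relevant to the prompt's question about improving a multi-rater assessment, is the Spearman-Brown formula, which projects the reliability of an average of k parallel raters from a single rater's reliability. The numbers below are illustrative, not from the excerpt.

```python
def spearman_brown(r_single, k):
    """Projected reliability of the mean of k parallel raters,
    given single-rater reliability r_single (Spearman-Brown formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# If a single rater's reliability (e.g., a single-rater ICC) is 0.50,
# averaging across 4 raters is projected to yield:
print(spearman_brown(0.50, 4))  # 0.8
```

This is the usual psychometric argument for adding raters: agreement among individual raters may be modest, but the composite score can still be quite reliable.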


