Family Medicine, May 2005 (Research Series)

Understanding Interobserver Agreement: The Kappa Statistic

Anthony J. Viera, MD; Joanne M. Garrett, PhD

Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described. (Fam Med 2005;37(5):360-3.)

In reading medical literature on diagnosis and interpretation of diagnostic tests, our attention is generally focused on items such as sensitivity, specificity, predictive values, and likelihood ratios. These items address the validity of the test. But if the people who actually interpret the test cannot agree ...
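The abstract's description of kappa (1 = perfect agreement, 0 = chance-level agreement) follows from the standard definition of Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each observer's marginal totals. Below is a minimal sketch in Python, assuming two observers rating a binary finding summarized in a 2x2 table; the function name and the example counts are illustrative, not taken from the article.

    # Sketch of Cohen's kappa for two observers rating a binary finding.
    # Table layout (an assumption for this example):
    #   table[i][j] = number of subjects rated category i by observer 1
    #                 and category j by observer 2.

    def cohens_kappa(table):
        n = sum(sum(row) for row in table)
        # Observed agreement: proportion of subjects on the diagonal.
        p_o = sum(table[i][i] for i in range(len(table))) / n
        # Chance-expected agreement, from each observer's marginal totals.
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
        # Agreement beyond chance, scaled by the maximum possible
        # agreement beyond chance.
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical example: two observers classify 100 radiographs as
    # abnormal or normal.
    table = [[40, 10],   # observer 1: abnormal
             [5, 45]]    # observer 1: normal
    print(round(cohens_kappa(table), 2))  # prints 0.7

In this illustrative table the raw agreement is 85%, but kappa is 0.70 once the 50% agreement expected by chance is removed. It also hints at the limitation the abstract mentions: because p_e is built from the marginal totals, the same raw agreement yields a lower kappa when the finding is very common or very rare.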