Question:
How do you handle a Bland-Altman analysis with more than one pair of measurements?
I have three readers compared pairwise in a round-robin ("circular") design; therefore I obtained three Bland-Altman analyses, each with its own limits of agreement (and confidence intervals). To get more accurate estimates, I would like to average the three limits of agreement, but I am not sure this is correct. That is my first question. My second concern is the confidence interval obtained when averaging standard deviations that themselves have confidence intervals. How do I compute a confidence interval for an averaged standard deviation (or averaged limits of agreement)?
Thanks for the help.
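For concreteness, here is a minimal sketch of what I did (the data are simulated and the reader arrays hypothetical): one Bland-Altman analysis per reader pair, each giving its own bias and limits of agreement.

```python
import itertools
import numpy as np

# Hypothetical readings: one array per reader, same subjects in the same order.
rng = np.random.default_rng(0)
truth = rng.normal(100, 10, size=30)
readers = {
    "A": truth + rng.normal(0, 2, size=30),
    "B": truth + rng.normal(0, 2, size=30),
    "C": truth + rng.normal(0, 2, size=30),
}

# One Bland-Altman analysis per reader pair ("2-by-2").
for (name1, x1), (name2, x2) in itertools.combinations(readers.items(), 2):
    diff = x1 - x2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    print(f"{name1} vs {name2}: bias={bias:.2f}, LoA=({lower:.2f}, {upper:.2f})")
```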
Solution:
Can you give us a bit more information about what you're trying to do? I'm not familiar with the expression "three readers circularly involved". What is your data type, and how many measurements, raters, and repeats do you have?
In general, for statistics of agreement:
Cohen's kappa gives an agreement coefficient for nominal grades, where each outcome is binary (agreement or disagreement) [1-3]. If you want to take the degree of agreement or disagreement into account, you can use a weighted kappa (usually with linear or quadratic weights). Both kappas are corrected for chance agreement [1].
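As an illustration (not from the original thread), scikit-learn's cohen_kappa_score computes both the unweighted and the weighted variants; the rating vectors below are made up.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal grades (0-3) from two raters on the same 10 cases.
rater1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
rater2 = [0, 1, 2, 3, 3, 1, 1, 2, 3, 0]

# Unweighted kappa: all disagreements count equally.
print(cohen_kappa_score(rater1, rater2))
# Weighted kappa: larger disagreements are penalised more.
print(cohen_kappa_score(rater1, rater2, weights="linear"))
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))
```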
For continuous variables you need either the intraclass correlation coefficient (ICC) [8] or a Bland-Altman (BA) analysis [7].
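If Python is an option, one way to get the Shrout-Fleiss ICC variants together with their 95% confidence intervals is the pingouin package; a minimal sketch with invented long-format data:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per rating of one subject by one rater.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "rater":   ["A", "B", "C"] * 4,
    "score":   [8.1, 7.9, 8.4, 6.2, 6.0, 6.5, 9.0, 8.8, 9.1, 5.5, 5.9, 5.7],
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
# The table lists the ICC variants of Shrout & Fleiss [8] with 95% CIs.
print(icc[["Type", "ICC", "CI95%"]])
```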
Note that for ordinal data the ICC and a weighted kappa with quadratic weights are comparable (and exactly equivalent for uniform marginal distributions) [9,10].
The BA plot is a graphical representation of the same concept: for two assessors (or groups of assessors), the difference of each pair of assessments is plotted against their mean [4,11]. As such, it shows any bias and the limits of agreement between the two raters.
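A minimal sketch of such a plot, assuming simulated paired readings and the conventional 1.96-SD limits of agreement:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical paired measurements from two raters on the same subjects.
rng = np.random.default_rng(1)
a = rng.normal(100, 10, size=40)
b = a + rng.normal(0.5, 3, size=40)  # rater B reads slightly higher on average

mean = (a + b) / 2   # x-axis: mean of each pair of assessments
diff = a - b         # y-axis: difference within each pair
bias = diff.mean()
sd = diff.std(ddof=1)

plt.scatter(mean, diff)
plt.axhline(bias, color="k", label=f"bias = {bias:.2f}")
for loa in (bias - 1.96 * sd, bias + 1.96 * sd):
    plt.axhline(loa, color="k", linestyle="--")  # limits of agreement
plt.xlabel("Mean of the two assessments")
plt.ylabel("Difference between assessments")
plt.legend()
plt.show()
```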
1 Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70:213-20.
2 Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37-46.
3 Kraemer HC. Extension of the kappa coefficient. Biometrics. 1980;36:207-16.
4 Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307-10.
5 Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics. 1977;33:363-74.
6 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
7 Rousson V, Gasser T, Seifert B. Assessing intrarater, interrater and test-retest reliability of continuous measurements. Stat Med. 2002;21:3431-46.
8 Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420-8.
9 Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33:613-9.
10 Fleiss JL, Shrout PE. Approximate interval estimation for a certain intraclass correlation coefficient. Psychometrika. 1978;43:259-62.
11 Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Int J Nurs Stud. 2010;47:931-6.