Inter-reader agreement in kidney mass scoring
Distribution of reader scores
The distributions of individual reader scores are plotted in the bar charts below.
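A minimal sketch of how such per-reader bar charts could be produced, assuming the scores sit in a long-format table with columns case_id, reader, and score (hypothetical column names and file path):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical long-format table: one row per (case, reader) with an ordinal score.
scores = pd.read_csv("reader_scores.csv")  # assumed columns: case_id, reader, score

readers = sorted(scores["reader"].unique())
fig, axes = plt.subplots(1, len(readers), figsize=(4 * len(readers), 3), sharey=True)

for ax, reader in zip(axes, readers):
    # Count how many cases received each score value for this reader
    counts = (scores.loc[scores["reader"] == reader, "score"]
                    .value_counts()
                    .sort_index())
    ax.bar(counts.index, counts.values)
    ax.set_title(reader)
    ax.set_xlabel("score")
axes[0].set_ylabel("number of cases")

fig.tight_layout()
plt.show()
```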
Agreement between readers
The score differences between pairs of readers are plotted below. Reader agreement is assessed with the intraclass correlation coefficient (ICC) (Shrout and Fleiss 1979), reported together with its 95% confidence interval.
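The section does not state which ICC form was used. As a sketch, the ICC(2,1) form of Shrout and Fleiss (1979), i.e. two-way random effects, single rater, absolute agreement, can be computed from a complete cases-by-readers score matrix as below; the example matrix is hypothetical:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) of Shrout & Fleiss (1979): two-way random effects,
    single rater, absolute agreement. `ratings` is an (n cases x k readers)
    array with no missing values."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    case_means = ratings.mean(axis=1)    # per-case (row) means
    reader_means = ratings.mean(axis=0)  # per-reader (column) means

    # Two-way ANOVA sums of squares
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_cases = k * ((case_means - grand_mean) ** 2).sum()
    ss_readers = n * ((reader_means - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_cases - ss_readers

    ms_cases = ss_cases / (n - 1)
    ms_readers = ss_readers / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_cases - ms_error) / (
        ms_cases + (k - 1) * ms_error + k * (ms_readers - ms_error) / n
    )

# Hypothetical 4-case, 3-reader score matrix
scores = np.array([[3, 3, 2],
                   [4, 4, 3],
                   [2, 2, 2],
                   [5, 4, 4]])
print(round(icc_2_1(scores), 3))
```

In practice the point estimate and its 95% confidence interval would more likely come from an existing implementation (for example, an ICC routine in an R or Python statistics package) using the F-distribution formulas given by Shrout and Fleiss (1979).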
It appears that "Lee vs Toia" has the strongest agreement, whereas Robbins tends to give lower scores than the other two readers.
References
Shrout, Patrick E., and Joseph L. Fleiss. 1979. “Intraclass Correlations: Uses in Assessing Rater Reliability.” Psychological Bulletin 86 (2): 420–28. https://doi.org/10.1037/0033-2909.86.2.420.