Title: Agreement Between Two Ratings with Different Ordinal Scales
Abstract: Agreement studies, in which several observers rate the same subjects for some characteristic measured on an ordinal scale, provide important information. The weighted Kappa coefficient is a popular measure of agreement for ordinal ratings. However, in some studies, the raters use scales with different numbers of categories. For example, a patient quality of life questionnaire may ask 'How do you feel today?' with possible answers ranging from 1 (worst) to 7 (best). At the same visit, the doctor reports his or her impression of the patient's health status as very poor, poor, fair, good, or very good. The weighted Kappa coefficient is not applicable here because the two scales have different numbers of categories. In this paper, we discuss Kappa coefficients to measure agreement between such ratings. In particular, with R categories for one rating and C categories for the other, dichotomizing the two ratings at all possible cutpoints yields (R−1)(C−1) possible (2×2) tables. For each of these (2×2) tables, we estimate the Kappa coefficient for dichotomous ratings. The largest estimated Kappa coefficients suggest the cutpoints for the two ratings where agreement is highest and where categories can be combined for further analysis.
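The procedure described in the abstract is straightforward to sketch in code: collapse an R×C contingency table of paired ratings into a 2×2 table at every cutpoint pair, compute Cohen's Kappa for each, and report the cut with the largest estimate. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names and the randomly generated example table are hypothetical.

```python
import numpy as np

def kappa_2x2(table):
    """Cohen's kappa for a 2x2 agreement table of counts."""
    n = table.sum()
    po = np.trace(table) / n                   # observed agreement
    pe = (table.sum(1) @ table.sum(0)) / n**2  # agreement expected by chance
    return (po - pe) / (1 - pe)

def best_cutpoints(counts):
    """Dichotomize an R x C table at each of the (R-1)(C-1) cutpoint
    pairs and return the pair with the largest estimated kappa."""
    R, C = counts.shape
    best_kappa, best_cut = -np.inf, None
    for r in range(1, R):           # cut row scale after category r
        for c in range(1, C):       # cut column scale after category c
            t = np.array([
                [counts[:r, :c].sum(), counts[:r, c:].sum()],
                [counts[r:, :c].sum(), counts[r:, c:].sum()],
            ])
            k = kappa_2x2(t)
            if k > best_kappa:
                best_kappa, best_cut = k, (r, c)
    return best_kappa, best_cut

# Hypothetical 7 x 5 table: rows = patient scale (1..7),
# columns = doctor scale (very poor .. very good).
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(7, 5))
kappa, (r, c) = best_cutpoints(counts)
print(f"max kappa {kappa:.3f} at cutpoints r={r}, c={c}")
```

As in the paper's description, the maximizing pair (r, c) indicates where the two scales agree most strongly and where adjacent categories could be merged for further analysis.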
Publication Year: 2007
Publication Date: 2007-08-07
Language: en
Type: book-chapter
Indexed In: ['crossref']
Cited By Count: 4