14.05.2019 12:37

Newly published article on agreement coefficients by Daniel Klein:

Klein, Daniel (2018): Implementing a General Framework for Assessing Interrater Agreement in Stata. In: The Stata Journal 18 (4), pp. 871–901.

Despite its well-known weaknesses, researchers continue to choose the kappa coefficient (Cohen, 1960, Educational and Psychological Measurement 20: 37–46; Fleiss, 1971, Psychological Bulletin 76: 378–382) to quantify agreement among raters. Part of kappa's persistent popularity seems to arise from a lack of alternative agreement coefficients in statistical software packages such as Stata. In this article, I review Gwet's (2014, Handbook of Inter-Rater Reliability) recently developed framework of interrater agreement coefficients. This framework extends several agreement coefficients to handle any number of raters, any number of rating categories, any level of measurement, and missing values. I introduce the kappaetc command, which implements this framework in Stata.
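
For anyone who wants to try the command, here is a minimal usage sketch. It assumes kappaetc has been installed from the SSC archive; the variable names rater1-rater4 are hypothetical placeholders for a dataset in which each row is one rated subject and each variable holds one rater's ratings.

    * Install the command from SSC (one-time setup)
    ssc install kappaetc

    * Compute agreement among four raters whose ratings are stored
    * in the variables rater1-rater4. By default, kappaetc reports
    * several chance-corrected agreement coefficients (including
    * Cohen/Conger's kappa, Gwet's AC, and Krippendorff's alpha)
    * together with their standard errors.
    kappaetc rater1 rater2 rater3 rater4

    * For ordinal ratings, partial agreement between nearby
    * categories can be credited via a weighting scheme,
    * e.g., quadratic weights:
    kappaetc rater1 rater2 rater3 rater4, wgt(quadratic)

Missing values in individual rater variables are handled by the framework itself, so subjects rated by only a subset of raters need not be dropped by hand.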