Technical Note

PBS. 2015; 5(3): 142-4


Kappa test

Selim Kılıç.




Abstract

The kappa coefficient is a statistic that measures inter-rater agreement for categorical items. It is generally considered a more robust measure than simple percent agreement, since κ takes into account the agreement that occurs by chance. Cohen’s kappa measures agreement between two raters only, whereas Fleiss’ kappa is used when there are more than two raters. κ may take a value between -1 and +1. A value of kappa equal to +1 implies perfect agreement between the two raters, while a value of -1 implies perfect disagreement. If kappa is 0, there is no relationship between the ratings of the two observers, and any agreement or disagreement is due to chance alone.

Key words: observer, agreement, due to chance
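As a brief illustration of the chance correction described in the abstract (not part of the original note), Cohen's kappa can be written as κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal frequencies. A minimal Python sketch, using hypothetical ratings from two raters:

# Minimal sketch of Cohen's kappa for two raters.
# The ratings below are hypothetical and for illustration only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of agreement (simple percent agreement)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal proportions
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    # Chance-corrected agreement
    return (p_o - p_e) / (1 - p_e)

r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
r2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(r1, r2))  # 0.5

Here the simple percent agreement is 0.75, while kappa is 0.50, showing how correcting for chance lowers the apparent agreement. For two raters, sklearn.metrics.cohen_kappa_score should give the same value.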





