Kappa

Kappa, or Cohen's Kappa, is a statistical measure of how well two observers agree when categorizing or labeling the same items, corrected for the agreement expected by chance. (Extensions such as Fleiss' Kappa handle more than two raters.) It yields a score from -1 to 1: a value close to 1 indicates strong agreement, 0 means agreement is no better than chance, and a negative value means agreement is worse than chance. Kappa lets researchers quantify the reliability and consistency of subjective judgments, showing that agreement between observers is meaningful rather than coincidental.
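The chance correction described above is captured by the standard formula κ = (p_o − p_e) / (1 − p_e), where p_o is the observed fraction of items the two raters label identically and p_e is the agreement expected by chance from each rater's label frequencies. A minimal sketch in Python (the function name and example labels are illustrative, not from any particular library):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    Undefined (0/0) when both raters always assign the same single label.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters label four items "yes"/"no".
# They agree on 3 of 4 (p_o = 0.75); chance agreement p_e = 0.5,
# so kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5.
print(cohens_kappa(["yes", "yes", "no", "no"],
                   ["yes", "no", "no", "no"]))  # 0.5
```

Note that kappa can be much lower than raw percent agreement: here the raters agree 75% of the time, yet kappa is only 0.5 because much of that agreement is expected by chance.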