Weighted Kappa

Cohen's kappa is broadly used in cross-classification as a measure of agreement between observed raters. It is an appropriate index of agreement when ratings are nominal scales with no order structure. The development of Cohen's weighted kappa was motivated by the fact that some assignments in a contingency table might be of greater gravity than others. The statistic relies on predefined cell weights reflecting either agreement or disagreement.
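For reference, a common textbook parameterization of the weighted statistic, given here as a sketch rather than the procedure's exact computational form, uses agreement weights w_ij for a table with k ordered categories:

\kappa_w = \frac{p_{o(w)} - p_{e(w)}}{1 - p_{e(w)}}, \qquad
p_{o(w)} = \sum_{i,j} w_{ij}\, p_{ij}, \qquad
p_{e(w)} = \sum_{i,j} w_{ij}\, p_{i+}\, p_{+j}

where p_ij are the observed cell proportions and p_i+ and p_+j are the row and column marginal proportions. Under this parameterization, the linear and quadratic weighting scales are commonly defined as

w_{ij} = 1 - \frac{\lvert i - j \rvert}{k-1} \quad \text{(linear)}, \qquad
w_{ij} = 1 - \frac{(i - j)^2}{(k-1)^2} \quad \text{(quadratic)}.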

The Weighted Kappa procedure provides options for estimating Cohen's weighted kappa, an important generalization of the kappa statistic that measures the agreement between two ordinal rating variables with identical categories.

Note: The Weighted Kappa procedure supersedes the functionality previously provided by the STATS WEIGHTED KAPPA.spe extension.
Example
There are situations where differences between raters should not all be treated as equally important. An example is the healthcare industry, where multiple people collect research or clinical data. In such cases, the reliability of the data can come into question given the variability among those collecting it.
Statistics
Cohen's weighted kappa, linear scale, quadratic scale, asymptotic confidence interval.
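As an illustrative cross-check outside IBM SPSS Statistics, the following minimal Python sketch obtains point estimates with linear and quadratic weighting using scikit-learn's cohen_kappa_score. The ratings are hypothetical, and the sketch does not reproduce the procedure's asymptotic confidence interval.

# Minimal sketch (outside IBM SPSS Statistics): Cohen's weighted kappa point
# estimates with linear and quadratic weighting, using scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings from two raters on the same ordinal scale (1 = low ... 4 = high)
rater_a = [1, 2, 2, 3, 4, 4, 1, 3, 2, 4]
rater_b = [1, 2, 3, 3, 4, 3, 2, 3, 2, 4]

print(cohen_kappa_score(rater_a, rater_b, weights="linear"))     # linear weighting scale
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))  # quadratic weighting scale
# Note: these are point estimates only; the asymptotic confidence interval
# reported by the Weighted Kappa procedure is not computed here.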

Weighted Kappa data considerations

Data
A two-way table based on the active dataset is required to estimate Cohen's weighted kappa statistic.
Rating variables must be of the same type (all string or all numeric).
The estimation of Cohen's weighted kappa makes sense only when the categories of the two rating variables, represented by the rows and columns of the table, are appropriately ordered. For a pair of numeric variables, numerical order is applied; for a pair of string variables, alphabetical order is applied (see the sketch below).
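The alphabetical ordering of string ratings matters in practice, because the alphabetical order of the labels may not match the intended ordinal order. The following minimal Python sketch (illustrative only, not part of the procedure) shows the effect and one possible workaround.

# Illustrative sketch: alphabetical order applied to string rating categories
# may not match the intended ordinal order.
categories = ["Low", "Medium", "High"]   # intended ordinal order
print(sorted(categories))                # -> ['High', 'Low', 'Medium'] (alphabetical)

# One possible workaround is to use labels whose alphabetical order matches the
# intended order, or to recode the ratings as numbers.
recoded = ["1_Low", "2_Medium", "3_High"]
print(sorted(recoded))                   # -> ['1_Low', '2_Medium', '3_High']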
Assumptions
When a mixed variable pair (one string and one numeric) is selected, Cohen's weighted kappa is not estimated.
The rating variables are assumed to share the same set of categories.

To obtain a Weighted Kappa analysis

This feature requires the Statistics Base option.

  1. From the menus choose:

    Analyze > Scale > Weighted Kappa...

  2. Select two or more string or numeric variables to specify as Pairwise raters.
    Note: You must select either all string variables or all numeric variables.
  3. Optionally, enable the Specify raters for rows and columns setting to control the display of pairwise raters or row/column raters.
    • When enabled, pairwise raters are suppressed and row/column raters display. The user interface updates to provide Row rater(s) and Column rater(s) fields (effectively replacing the Pairwise raters field).
    • When disabled, row/column raters are suppressed and pairwise raters display (the default setting).

    When Specify raters for rows and columns is enabled, specify at least one variable for both Row rater(s) and Column rater(s).

    Note: If both Row rater(s) and Column rater(s) contain only one variable, the selected variables cannot be the same for both.
  4. Optionally, click Criteria to specify the weighting scale and missing values settings, or Print to specify the display format and crosstabulation settings.

This procedure pastes WEIGHTED KAPPA command syntax.