In our example, Cohen's kappa (κ) = 0.65, which corresponds to fair to good agreement according to the classification of Fleiss et al. (2003). This is supported by the obtained p-value (p < 0.05), which indicates that the calculated kappa differs significantly from zero. To explain how the observed and expected agreement are calculated, consider the following contingency table: two clinical psychologists were asked to diagnose whether or not 70 people suffer from depression (a worked sketch of this calculation appears below).

While most researchers have all coders evaluate the same set of units for the reliability assessment, some systematically assign different sets of units to different coders. We recommend the former approach because it provides the most effective basis for assessing intercoder agreement, and we join others (e.g., Neuendorf, 2002) in advising against the latter (fortunately, most indices can accommodate missing data that arise when different coders evaluate different units for whatever reason). See Neuendorf (2002) and Potter and Levine-Donnerstein (1999) for discussion. Intercoder reliability is the widely used term for the extent to which independent coders evaluate a characteristic of a message or artifact and reach the same conclusion.
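A minimal sketch of the kappa calculation, assuming hypothetical cell counts for the 70 diagnoses (they are not the counts from the original contingency table, although they happen to yield a kappa close to the 0.65 reported above):

```python
# Hypothetical 2x2 contingency table: rows = psychologist A, columns = psychologist B.
#                      B: depressed   B: not depressed
# A: depressed               25               5
# A: not depressed            7              33
table = [
    [25, 5],
    [7, 33],
]

n = sum(sum(row) for row in table)            # total number of cases (here 70)

# Observed agreement: proportion of cases on the diagonal (both raters agree).
p_o = (table[0][0] + table[1][1]) / n

# Expected agreement: for each category, multiply the two raters' marginal
# proportions and sum over categories (agreement expected by chance alone).
row_marginals = [sum(row) / n for row in table]                              # rater A
col_marginals = [sum(table[i][j] for i in range(2)) / n for j in range(2)]   # rater B
p_e = sum(r * c for r, c in zip(row_marginals, col_marginals))

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)

print(f"observed agreement p_o = {p_o:.3f}")
print(f"expected agreement p_e = {p_e:.3f}")
print(f"Cohen's kappa          = {kappa:.3f}")
```

For these illustrative counts, the script prints an observed agreement of about 0.83, an expected agreement of about 0.51, and a kappa of about 0.65.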
Although the term intercoder reliability is appropriate in its generic use as a reference to the consistency of a measure and is used here, Tinsley and Weiss (1975, 2000) note that the more specific term for the type of consistency required in content analysis is intercoder (or interrater) agreement. They write that while reliability could be based on correlational indices (or analyses of variance) that assess the degree to which “the ratings of different judges are the same when expressed as deviations from their means,” intercoder agreement is needed in content analysis because it measures only “the extent to which the different judges tend to assign exactly the same rating to each object” (Tinsley & Weiss, 2000, p. 98). Even when intercoder agreement is used for variables at the interval or ratio levels of measurement, actual agreement on the coded values (even if similar rather than identical values “count”) is the basis of the assessment. . . .
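Tinsley and Weiss's distinction can be made concrete with a small, hypothetical example: two judges whose ratings are perfectly correlated because one always scores two points higher than the other. A correlation-based index treats them as perfectly reliable, yet they never assign exactly the same value, so their exact agreement is zero.

```python
# Sketch (hypothetical ratings, not from the source): correlation-type reliability
# can be perfect even when two judges never assign the same value, which is why
# content analysis relies on agreement on the coded values themselves.
from statistics import mean

rater_a = [1, 2, 3, 4, 5, 6]          # hypothetical interval-level codes
rater_b = [3, 4, 5, 6, 7, 8]          # rater B is consistently 2 points higher

def pearson_r(x, y):
    """Pearson correlation: covariation of deviations from each judge's mean."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Correlation says the judges are perfectly consistent ...
print(pearson_r(rater_a, rater_b))    # 1.0

# ... but they never agree exactly, so percent agreement is zero.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)                      # 0.0
```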