{"id":164,"date":"2020-06-17T19:31:22","date_gmt":"2020-06-18T00:31:22","guid":{"rendered":"https:\/\/apex.lmc.gatech.edu\/?page_id=164"},"modified":"2022-09-30T10:13:40","modified_gmt":"2022-09-30T15:13:40","slug":"inter-rater-reliability","status":"publish","type":"page","link":"https:\/\/apex.lmc.gatech.edu\/?page_id=164","title":{"rendered":"Inter-Rater Reliability"},"content":{"rendered":"<div class=\"entry-content\" itemprop=\"text\">\n<p>Establishing inter-rater reliability is important in order to ensure that multiple analysts deliver consistent results when analyzing the same data. We recommend that two analysts each analyze a portion of the recorded videos and calculate an inter-rater reliability score using that data. We used [1]&#8217;s method for calculating minimum sample size needed to establish reliability. We use Gwet\u2019s AC1 statistic [2] to calculate inter-rater reliability, due to a recognized issue with Cohen\u2019s Kappa when it is calculated for data in which certain events are rare (e.g. codes like discord or positive\/negative emotion) [3]. The AC1 statistic is an alternative to Cohen&#8217;s Kappa that corrects for this issue while still accounting for chance agreement [2].<\/p>\n<p>&nbsp;<\/p>\n<h4>References<\/h4>\n<div class=\"csl-bib-body\">\n<div class=\"csl-entry\">\n<ol>\n<li class=\"csl-right-inline\">Stephen Lacy and Daniel Riffe. 1996. Sampling error and selecting intercoder reliability samples for nominal content categories. <i>Journalism &amp; Mass Communication Quarterly<\/i> 73, 4: 963\u2013973.<\/li>\n<li class=\"csl-right-inline\">Kilem Li Gwet. 2008. Computing inter-rater reliability and its variance in the presence of high agreement. <i>British Journal of Mathematical and Statistical Psychology<\/i> 61, 1: 29\u201348.<\/li>\n<li class=\"csl-bib-body\">\n<div class=\"csl-entry\">\n<div class=\"csl-right-inline\">Anthony J Viera, Joanne M Garrett, and others. 2005. Understanding interobserver agreement: the kappa statistic. <i>Fam Med<\/i> 37, 5: 360\u2013363.<\/div>\n<\/div>\n<\/li>\n<\/ol>\n<\/div>\n<\/div>\n\n\n<\/div>\n","protected":false},"excerpt":{"rendered":"<div class=\"entry-summary\" itemprop=\"text\">\n<p>Establishing inter-rater reliability is important in order to ensure that multiple analysts deliver consistent results when analyzing the same data. We recommend that two analysts each analyze a portion of the recorded videos and calculate an inter-rater reliability score using that data. We used [1]&#8217;s method for calculating minimum sample size needed to establish reliability. 
#### References

1. Stephen Lacy and Daniel Riffe. 1996. Sampling error and selecting intercoder reliability samples for nominal content categories. *Journalism & Mass Communication Quarterly* 73, 4: 963–973.
2. Kilem Li Gwet. 2008. Computing inter-rater reliability and its variance in the presence of high agreement. *British Journal of Mathematical and Statistical Psychology* 61, 1: 29–48.
3. Anthony J. Viera, Joanne M. Garrett, and others. 2005. Understanding interobserver agreement: the kappa statistic. *Family Medicine* 37, 5: 360–363.