The Journal of Alternative and Complementary Medicine: Research on Paradigm, Practice, and Policy

Appropriate Statistics for Determining Chance-Removed Interpractitioner Agreement

Abstract

Objectives: Fleiss' Kappa (FK) has commonly, but incorrectly, been employed as the standard for evaluating chance-removed inter-rater agreement with ordinal data. This practice may lead to misleading conclusions in inter-rater agreement research. An example is presented that demonstrates the conditions under which FK produces inappropriate results compared with Gwet's AC2, which is proposed as a more appropriate statistic. A novel format for recording Chinese Medicine (CM) diagnoses, called the Diagnostic System of Oriental Medicine (DSOM), was used to record and compare patient diagnostic data; unlike the contemporary CM diagnostic format, it allows agreement by chance to be taken into account when evaluating patient data obtained with unrestricted diagnostic options available to diagnosticians. Design: Five CM practitioners diagnosed 42 subjects drawn from an open population. Subjects' diagnoses were recorded in the DSOM format. All the available data were initially used to evaluate agreement. The subjects were then sorted into three groups to demonstrate the effect of differing data marginality on the calculated chance-removed agreement. Outcome measures: Agreement between the practitioners for each subject was evaluated with linearly weighted simple agreement, FK, and Gwet's AC2. Results and Conclusions: In all cases, overall agreement was much lower with FK than with Gwet's AC2. Larger differences occurred when the data were more free-marginal. Inter-rater agreement determined with the FK statistic is unlikely to be correct unless it can be shown that the data from which agreement is determined are, in fact, fixed-marginal. It follows that results on agreement between practitioners obtained with FK are probably incorrect. Inter-rater agreement evaluated with the AC2 statistic is shown to be an appropriate measure when fixed-marginal data are neither expected nor guaranteed. The AC2 statistic should be used as the standard statistical approach for determining agreement between practitioners.
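
To make the contrast concrete, the following is a minimal Python sketch, not the study's code, of linearly weighted Fleiss-style kappa and Gwet's AC2 for multiple raters, following the multi-rater formulas as they are usually presented (for example, in Gwet's Handbook of Inter-Rater Reliability). The toy counts matrix and the helper names (linear_weights, weighted_agreement, and so on) are illustrative assumptions only. With heavily skewed category proportions, as in free-marginal data, kappa's chance-agreement term rises toward the observed agreement and the coefficient collapses, while AC2's does not.

    import numpy as np

    def linear_weights(q):
        """Linear (ordinal) weights: w[k, l] = 1 - |k - l| / (q - 1)."""
        idx = np.arange(q)
        return 1.0 - np.abs(idx[:, None] - idx[None, :]) / (q - 1)

    def weighted_agreement(counts, w):
        """Observed weighted agreement; counts[i, k] = number of raters
        placing subject i in category k (each subject needs >= 2 raters)."""
        r_i = counts.sum(axis=1)              # raters per subject
        r_star = counts @ w.T                 # r*_ik = sum_l w[k, l] * r_il
        per_subject = (counts * (r_star - 1)).sum(axis=1) / (r_i * (r_i - 1))
        return per_subject.mean()

    def category_props(counts):
        """Mean per-subject category proportions (pi_k)."""
        return (counts / counts.sum(axis=1, keepdims=True)).mean(axis=0)

    def fleiss_kappa_w(counts, w):
        """Fleiss-style kappa with agreement weights (identity weights
        give the usual multi-rater Fleiss' kappa)."""
        pa = weighted_agreement(counts, w)
        pi = category_props(counts)
        pe = pi @ w @ pi                      # sum_{k,l} w_kl * pi_k * pi_l
        return (pa - pe) / (1 - pe)

    def gwet_ac2(counts, w):
        """Gwet's AC2 (reduces to AC1 with identity weights)."""
        q = counts.shape[1]
        pa = weighted_agreement(counts, w)
        pi = category_props(counts)
        pe = w.sum() / (q * (q - 1)) * np.sum(pi * (1 - pi))
        return (pa - pe) / (1 - pe)

    # Toy example (not the study's data): 5 raters, 3 ordinal categories,
    # with nearly all ratings falling in the highest category.
    counts = np.array([
        [0, 1, 4],
        [0, 0, 5],
        [0, 1, 4],
        [1, 0, 4],
        [0, 0, 5],
        [0, 1, 4],
    ], dtype=float)
    w = linear_weights(3)
    print("weighted Fleiss' kappa:", round(fleiss_kappa_w(counts, w), 3))
    print("Gwet's AC2:            ", round(gwet_ac2(counts, w), 3))

For this illustrative data, where raw agreement is high but the ratings pile into one category, the weighted kappa comes out near or below zero while AC2 stays high, which is the qualitative pattern the abstract describes for free-marginal data.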