Multivariate Behavioral Research
A meta-meta-analysis: Empirical review of statistical power, type I error rates, effect sizes, and model selection of meta-analyses published in psychology


Abstract

This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods of calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and the level of heterogeneity of effect sizes. In most meta-analyses, many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study-level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or a difference between two correlations. The median Birge's (1932) ratio was larger than the convention for medium heterogeneity proposed by Hedges and Pigott (2001), indicating that the typical meta-analysis shows variability in underlying effects well beyond that expected from sampling error alone. Fixed-effects models were used with greater frequency than random-effects models; however, random-effects models were used with increasing frequency over time. Results related to model selection are carefully compared with those of Schmidt, Oh, and Hayes (2009), who independently designed and produced a study similar to the one reported here. Recommendations for conducting future meta-analyses in light of these findings are provided.
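The prospective power calculation the abstract alludes to can be sketched with the large-sample z-approximation in the spirit of Hedges and Pigott (2001). The sketch below is illustrative, not the authors' exact procedure: the function name, the assumption of equal group sizes, and the standardized-mean-difference metric are all assumptions made here.

```python
from math import sqrt
from statistics import NormalDist

def meta_power(k, n_per_group, delta, tau2=0.0, alpha=0.05):
    """Approximate power for the z test of the mean standardized mean
    difference in a meta-analysis (large-sample sketch).

    k           : number of effect sizes
    n_per_group : average sample size per group in each study
    delta       : true mean effect (Cohen's d)
    tau2        : between-study variance (0 gives a fixed-effect model)
    """
    # Sampling variance of a single d with two groups of n_per_group:
    v = 2 / n_per_group + delta**2 / (4 * n_per_group)
    # Variance of the (equal-weight) mean effect across k studies:
    v_mean = (v + tau2) / k
    lam = delta / sqrt(v_mean)                 # noncentrality parameter
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Two-sided power, ignoring the negligible opposite tail:
    return 1 - NormalDist().cdf(z_crit - lam)
```

Calling `meta_power` with varying `k`, `n_per_group`, `delta`, and `tau2` reproduces the qualitative pattern the abstract describes: power rises with the number and size of studies and with effect magnitude, and falls as between-study heterogeneity grows.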
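The claim that many significance tests yield a sizable estimated probability of a Type I error follows from the standard familywise error identity, shown below under the simplifying assumption of independent tests (tests within a meta-analysis are typically correlated, so this is an approximation):

```python
def familywise_alpha(m, alpha=0.05):
    """Probability of at least one Type I error across m independent
    significance tests, each conducted at level alpha."""
    return 1 - (1 - alpha) ** m
```

With ten tests at the conventional 0.05 level, the chance of at least one false positive already exceeds 40 percent, which is the "sizable estimated probability" the abstract refers to.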
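Birge's (1932) ratio compares the homogeneity statistic Q to its degrees of freedom; values well above 1 indicate variability beyond sampling error. A minimal sketch, assuming the per-study sampling variances are known:

```python
def birge_ratio(effects, variances):
    """Q statistic divided by its degrees of freedom (k - 1).

    effects   : list of observed effect sizes
    variances : list of their sampling variances
    A ratio near 1 is consistent with homogeneity; a ratio well
    above 1 suggests heterogeneity beyond sampling error.
    """
    w = [1.0 / v for v in variances]
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))
    return q / (len(effects) - 1)
```

This is the same quantity reported as H-squared in some software; the abstract's finding is that the median value of this ratio across published meta-analyses exceeded the conventional benchmark for medium heterogeneity.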