
Stacked generalization in neural networks: generalization on statistically neutral problems


Abstract

Generalization continues to be one of the most important topics in neural networks and other classifiers. In recent years, a number of different methods have been developed to improve generalization accuracy. Any classifier that uses induction to find the class concept from the training patterns will have a hard time achieving an acceptable level of generalization accuracy when the problem to be learned is statistically neutral. A problem is statistically neutral if the probability of mapping an input onto an output is always the chance value of 0.5. We examine the generalization behaviour of multilayer neural networks on learning statistically neutral problems using single-level learning models (e.g., the conventional cross-validation scheme) as well as multiple-level learning models (e.g., the stacked generalization method). We show that for statistically neutral problems such as the parity and majority functions, the stacked generalization scheme improves classification performance and generalization accuracy over the single-level cross-validation model.
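The abstract's definition of statistical neutrality can be checked directly for the parity function it cites: conditioned on any single input bit taking any fixed value, the output is 1 exactly half the time, so no single input carries information about the class. A minimal sketch (the 3-bit case is chosen here for illustration; the paper does not specify an input size):

```python
from itertools import product

def parity(bits):
    # Parity: 1 if an odd number of input bits are set, else 0.
    return sum(bits) % 2

# Enumerate all 3-bit patterns and verify statistical neutrality:
# for every input position i and every fixed value v of that bit,
# P(output = 1 | bit i = v) equals the chance level 0.5.
n = 3
patterns = list(product([0, 1], repeat=n))
for i in range(n):
    for v in (0, 1):
        subset = [p for p in patterns if p[i] == v]
        p_one = sum(parity(p) for p in subset) / len(subset)
        assert p_one == 0.5
```

This is what makes parity hard for inductive learners: any statistic computed from a single input feature is uninformative, so the class concept is only visible in conjunctions of all inputs.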
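The multiple-level learning model the abstract refers to (Wolpert's stacked generalization) can be sketched roughly as follows, using scikit-learn. This is an illustrative assumption-laden setup, not the paper's actual configuration: the network architectures, the 6-bit parity task, the fold count, and the choice of logistic regression as the level-1 generalizer are all hypothetical.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Task: 6-bit parity, a statistically neutral problem.
X = np.array(list(product([0, 1], repeat=6)), dtype=float)
y = (X.sum(axis=1) % 2).astype(int)

# Level-0 generalizers: two small MLPs with different hidden sizes
# (sizes are illustrative, not taken from the paper).
level0 = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
          for h in (8, 16)]

# Level-1 training data: out-of-fold predictions from each level-0 net.
# Cross-validation ensures the level-1 learner is trained on the level-0
# nets' generalization behaviour rather than their memorized fits.
meta_X = np.column_stack([cross_val_predict(m, X, y, cv=4) for m in level0])

# Level-1 generalizer learns to combine the level-0 outputs.
level1 = LogisticRegression().fit(meta_X, y)

# Refit the level-0 nets on all data, then predict through both levels.
for m in level0:
    m.fit(X, y)
preds = level1.predict(np.column_stack([m.predict(X) for m in level0]))
```

The single-level baseline in the abstract would instead select one network directly by cross-validation; stacking differs in that the held-out predictions themselves become training data for a second-level learner.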
