Bounds on learning in polynomial time

Abstract

The performance of large neural networks can be judged not only by their storage capacity but also by the time required for learning. A polynomial learning algorithm with learning time proportional to N² in a network with N units might be practical, whereas a learning time proportional to exp(N) would allow only rather small networks. The question of the absolute storage capacity α_c and the capacity α_p for polynomial learning rules is discussed for several feedforward architectures: the perceptron, the binary perceptron, the committee machine, and a perceptron with fixed weights in the first layer and adaptive weights in the second layer. The analysis is based partially on dynamic mean-field theory, which is valid for N → ∞. In particular, for the committee machine a value of α_p considerably lower than the capacity predicted by replica theory or simulations is found. This discrepancy is resolved by new simulations investigating the dependence on learning time and revealing subtleties in the definition of the capacity. [References: 14]
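As a minimal illustration of the quantities the abstract refers to (not the paper's own method), the sketch below trains a simple perceptron with the classic Rosenblatt rule on P = αN random ±1 patterns and counts the weight updates needed until all patterns are stored. The helper name perceptron_learning_time and all parameter values are assumptions for illustration; the known result for this setup is α_c = 2 in the N → ∞ limit, and the measured learning time grows sharply as the load α approaches that capacity.

import numpy as np

rng = np.random.default_rng(0)

def perceptron_learning_time(N, alpha, kappa=0.0, max_sweeps=2000):
    """Train a perceptron on P = alpha*N random patterns with targets +/-1.

    Returns the number of weight updates until every pattern has stability
    >= kappa, or None if learning does not finish within max_sweeps.
    (Illustrative sketch; not the algorithm analyzed in the paper.)
    """
    P = int(alpha * N)
    xi = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
    sigma = rng.choice([-1.0, 1.0], size=P)     # random target outputs
    w = np.zeros(N)
    updates = 0
    for _ in range(max_sweeps):
        errors = 0
        for mu in range(P):
            # Stability of pattern mu, normalized so kappa is N-independent.
            h = sigma[mu] * (w @ xi[mu]) / np.sqrt(N)
            if h <= kappa:                      # misclassified / below margin
                w += sigma[mu] * xi[mu] / np.sqrt(N)  # Rosenblatt update
                updates += 1
                errors += 1
        if errors == 0:
            return updates
    return None                                 # no convergence within budget

# Learning time blows up as the load alpha approaches the capacity alpha_c = 2:
N = 100
for alpha in (0.5, 1.0, 1.5, 1.8):
    print(f"alpha = {alpha}: updates = {perceptron_learning_time(N, alpha)}")

Plotting the update count against α for several N would show the polynomial growth regime discussed in the abstract, and how the measured capacity depends on how much learning time one is willing to allow, which is the subtlety in the definition of the capacity that the abstract mentions.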