Neural Networks: The Official Journal of the International Neural Network Society

The No-Prop algorithm: a new learning algorithm for multilayer neural networks.



Abstract

A new learning algorithm for multilayer neural networks, which we have named No-Propagation (No-Prop), is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the training and generalization performance of the two algorithms is essentially the same when the number of training patterns is less than or equal to the LMS Capacity. When the number of training patterns exceeds the Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity, that is, by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop, and it converges much faster. It is too early to say definitively where to use one or the other of these algorithms. This is still a work in progress.
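To make the procedure concrete, below is a minimal NumPy sketch of the No-Prop idea as the abstract describes it: the hidden-layer weights are drawn at random and frozen, and only the output-layer weights are trained with the Widrow-Hoff LMS rule. The tanh hidden activation, the step size mu, the network sizes, and the random training data are illustrative assumptions, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_hidden, n_out = 4, 20, 1
    # LMS Capacity = number of weights per output neuron (hidden outputs + bias).
    n_patterns = n_hidden + 1

    X = rng.standard_normal((n_patterns, n_in))    # training patterns (illustrative)
    D = rng.standard_normal((n_patterns, n_out))   # desired responses (illustrative)

    # Hidden-layer weights: set once at random and never trained (the "No-Prop" step).
    W_hid = rng.standard_normal((n_in, n_hidden))
    b_hid = rng.standard_normal(n_hidden)

    def hidden(x):
        # Fixed random nonlinear expansion of the input.
        return np.tanh(x @ W_hid + b_hid)

    # Output-layer weights: the only trained parameters (+1 row for the bias weight).
    W_out = np.zeros((n_hidden + 1, n_out))

    mu = 0.01                                      # LMS step size (assumed)
    for epoch in range(2000):
        for x, d in zip(X, D):
            h = np.append(hidden(x), 1.0)          # hidden outputs augmented with a bias input
            e = d - h @ W_out                      # error of the linear output neuron
            W_out += mu * np.outer(h, e)           # Widrow-Hoff LMS update

    # With n_patterns <= LMS Capacity, the mean square error should approach zero.
    H = np.hstack([hidden(X), np.ones((n_patterns, 1))])
    print("final MSE:", float(np.mean((D - H @ W_out) ** 2)))

Raising n_hidden raises the Capacity in exactly the way the abstract describes: each extra hidden neuron adds one trainable weight to every output neuron, so more distinct patterns can be stored with zero error.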
