IEICE Electronics Express

Optimization and stabilization of sequential learning in RBF network for nonlinear function approximation



Abstract

This paper proposes a solution to the problem of inconsistent pruning of neurons in a sequential-learning Radial Basis Function (RBF) network. The paper adopts the view that a specific RBF neuron continuously exhibiting low output over a sequence of training patterns does not, by itself, justify the conclusion that the neuron is insignificant to the overall function being approximated. We establish additional criteria that protect against erroneous pruning of RBF neurons in the hidden layer, which we prove improves the consistency and stability of neuron evolution. With this stability in the sequential learning process, we also show how the convergence speed of the network can be improved by reducing the number of consecutive observations required to prune a neuron in the hidden layer.
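To make the pruning mechanism concrete, the sketch below illustrates the baseline consecutive-observation pruning rule common to sequential-learning RBF networks (e.g. MRAN-style): a hidden neuron whose normalized contribution stays below a threshold for M consecutive training patterns becomes a pruning candidate. The abstract does not specify the paper's additional protection criteria or its reduced observation count, so the threshold DELTA, the counter M_CONSECUTIVE, and all function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of consecutive-observation pruning in a sequential RBF
# network. Only the baseline rule is shown; DELTA and M_CONSECUTIVE are
# assumed values, not taken from the paper.

DELTA = 0.01          # threshold on a neuron's normalized contribution (assumed)
M_CONSECUTIVE = 5     # consecutive low-output observations before pruning (assumed)

def rbf_outputs(x, centers, widths, weights):
    """Weighted Gaussian response of each hidden neuron for one input x."""
    dist2 = np.sum((centers - x) ** 2, axis=1)
    return weights * np.exp(-dist2 / (2.0 * widths ** 2))

def update_prune_counters(x, centers, widths, weights, low_counts):
    """Increment the counter of every neuron whose normalized contribution
    falls below DELTA; reset counters of neurons that contribute enough.
    Returns indices of neurons whose counter reached M_CONSECUTIVE."""
    contrib = np.abs(rbf_outputs(x, centers, widths, weights))
    normalized = contrib / (np.max(contrib) + 1e-12)
    low = normalized < DELTA
    low_counts[low] += 1
    low_counts[~low] = 0
    return np.where(low_counts >= M_CONSECUTIVE)[0]

# Example: three hidden neurons observed over a short stream of inputs.
centers = np.array([[0.0], [1.0], [5.0]])
widths = np.array([1.0, 1.0, 1.0])
weights = np.array([1.0, 0.8, 0.5])
low_counts = np.zeros(3, dtype=int)

for x in np.linspace(0.0, 1.0, 10):      # inputs stay far from the third center
    prune_idx = update_prune_counters(np.array([x]), centers, widths,
                                      weights, low_counts)
    if prune_idx.size:
        print("candidates for pruning:", prune_idx)
```

The paper's contribution sits on top of such a rule: additional checks are applied before a low-output candidate is actually removed, and this protection in turn allows the consecutive-observation count to be reduced without destabilizing the evolution of the hidden layer.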

