Journal: 計測自動制御学会論文集 (Transactions of the Society of Instrument and Control Engineers)

Complexity control methods of dynamics in recurrent neural networks

Abstract

The paper demonstrates that the complexity of the dynamics of recurrent networks with N neurons can be controlled by our gradient methods. The complexity, i.e. the Lyapunov exponent, is calculated by observing the state transitions over a long period T. One of the control methods is based on the conventional learning algorithms for recurrent networks; it is highly precise, but it requires O(N^5 T) expected time. To reduce this expensive run time, we propose another method based on the approximate relation, reported in our previous papers, between the complexity and a new parameter of the network configuration. This approximation requires only O(N^2) run time. Simulation results show that the first method can control the exponent, and that the approximation method can control it under a restriction. With a combination method that incorporates the proposed control method into the conventional learning algorithm, the networks can learn not only a target time series but also the Lyapunov exponent of the target.
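The abstract does not spell out how the Lyapunov exponent is observed over the period T, so the following is only a minimal sketch, not the paper's procedure. It assumes a discrete-time network x(t+1) = tanh(W x(t)) and uses the standard tangent-vector (Benettin-style) estimate of the largest Lyapunov exponent; the function name, the weight gain 1.5/sqrt(N), and all parameters below are illustrative assumptions. Note that this estimate itself costs roughly O(N^2) per step; the O(N^5 T) and O(N^2) figures in the abstract refer to the paper's control methods, not to this measurement.

```python
# Sketch: largest Lyapunov exponent of the map x -> tanh(W x),
# estimated by following a tangent vector for T steps and averaging
# the log of its per-step stretch factor (Benettin-style method).
import numpy as np

def largest_lyapunov_exponent(W, x0, T=10000, burn_in=1000):
    x = np.array(x0, dtype=float)
    v = np.random.default_rng(0).standard_normal(x.shape)
    v /= np.linalg.norm(v)
    log_sum = 0.0
    for t in range(T + burn_in):
        pre = W @ x
        x = np.tanh(pre)
        # Jacobian of the map at the previous state: diag(1 - tanh(pre)^2) @ W
        J = (1.0 - np.tanh(pre) ** 2)[:, None] * W
        v = J @ v
        growth = np.linalg.norm(v)
        v /= growth
        if t >= burn_in:          # discard the transient before accumulating
            log_sum += np.log(growth)
    return log_sum / T

if __name__ == "__main__":
    N = 20
    rng = np.random.default_rng(1)
    W = rng.standard_normal((N, N)) * (1.5 / np.sqrt(N))  # gain > 1/sqrt(N) tends toward chaos
    print(largest_lyapunov_exponent(W, rng.standard_normal(N)))
```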