
Structure-based neural network learning


Abstract

We present a new learning algorithm for the structure of recurrent neural networks. It is shown that any m linearly independent n-dimensional vectors can be stored in an at most (n+m-2)-dimensional symmetric network, and a storage procedure that satisfies this bound is presented. We also propose a new learning procedure for the domain of attraction which preserves both the equilibrium set and the stability property of the original system. It is shown that previously learned attraction regions remain invariant under the proposed learning rule. Our emphasis throughout this brief is on the design of associative memories and classifiers.
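The abstract does not spell out the storage procedure itself. Purely as a point of reference, the sketch below shows a standard Hopfield-style outer-product rule for storing bipolar patterns as fixed points of a symmetric recurrent network; this is not the authors' (n+m-2)-dimensional construction, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

# Minimal Hopfield-style associative memory sketch (assumption: this is NOT the
# paper's (n+m-2)-dimensional construction, just the classical outer-product
# storage rule shown for orientation). Patterns are bipolar {-1, +1} n-vectors.

def store(patterns):
    """Build a symmetric weight matrix from m bipolar n-vectors via Hebbian outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)   # zero self-connections; W stays symmetric
    return W / patterns.shape[0]

def recall(W, probe, steps=20):
    """Iterate the network dynamics from a noisy probe toward a stored fixed point."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1          # break ties toward +1
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memories = rng.choice([-1, 1], size=(3, 64))   # m = 3 patterns, n = 64 units
    W = store(memories)
    noisy = memories[0].copy()
    noisy[:8] *= -1                                 # corrupt 8 components
    print(np.array_equal(recall(W, noisy), memories[0]))  # typically True for small m relative to n
```

For small m relative to n the corrupted probe converges back to the stored pattern, which is the associative-memory behavior the abstract is concerned with; the paper's contribution is a structural bound and a learning rule for the attraction regions, which this outer-product sketch does not capture.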
