
Architecture optimization, training convergence and network estimation robustness of a fully connected recurrent neural network.


Abstract

Recurrent neural networks (RNNs) have developed rapidly in recent years, with applications in system identification, optimization, image processing, pattern recognition, classification, clustering, associative memory, and related areas.

In this study, an optimized RNN is proposed for modeling nonlinear dynamical systems. A fully connected RNN is first developed by modifying a fully forward connected neural network (FFCNN) to accommodate recurrent connections among its hidden neurons. A destructive structure-optimization algorithm is then applied, and the extended Kalman filter (EKF) is adopted as the network's training algorithm; the two algorithms work together seamlessly to generate the optimized RNN. The improved modeling performance of the optimized network stems from three sources: (1) its prototype, the FFCNN, outperforms the multilayer perceptron (MLP), the most widely used network, in modeling accuracy and generalization ability; (2) the recurrent connections make the network better suited to modeling nonlinear dynamical systems; and (3) the structure-optimization algorithm further improves the RNN's generalization ability and robustness.

Performance studies of the proposed network focus on training convergence and robustness. For the convergence study, the Lyapunov method is used to adapt some training parameters so as to guarantee convergence, while maximum-likelihood estimation of other parameters accelerates the training process.
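The abstract names the EKF as the training algorithm, treating the network weights as the filter state. A minimal sketch of that idea, under stated assumptions: a single recurrent unit stands in for the dissertation's FFCNN-derived architecture, and the output Jacobian is taken by finite differences rather than any method from the dissertation.

```python
import numpy as np

def rnn_step(w, x, h):
    """One step of a toy recurrent unit: h' = tanh(w[0]*x + w[1]*h), y = w[2]*h'."""
    h_new = np.tanh(w[0] * x + w[1] * h)
    return w[2] * h_new, h_new

def ekf_update(w, P, x, h, y_target, R=0.1, Q=1e-4, eps=1e-6):
    """One EKF update of the weight vector w with covariance P."""
    y_pred, h_new = rnn_step(w, x, h)
    # Jacobian of the scalar output w.r.t. the weights, by finite differences.
    H = np.zeros((1, len(w)))
    for i in range(len(w)):
        wp = w.copy()
        wp[i] += eps
        H[0, i] = (rnn_step(wp, x, h)[0] - y_pred) / eps
    P = P + Q * np.eye(len(w))                 # process noise on the weights
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    w = w + (K * (y_target - y_pred)).ravel()  # weight (state) update
    P = P - K @ H @ P                          # covariance update
    return w, P, h_new

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=3)
P = np.eye(3)
h = 0.0
# Train the unit to track y = 0.5*sin(x) over a short input sweep.
for x in np.linspace(0, 2 * np.pi, 200):
    w, P, h = ekf_update(w, P, x, h, 0.5 * np.sin(x))
```

The key design point mirrored from the abstract is that EKF training is a state-estimation view of learning: the "measurement" is the desired output, and the Kalman gain distributes the output error over the weights according to their current uncertainty P.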
In addition, a robustness analysis is conducted to develop a robustness measure that accounts for uncertainty propagation through the RNN via the unscented transform.

Two case studies, the modeling of a benchmark nonlinear dynamical system and of tool-wear progression in hard turning, are carried out to validate the developments in this dissertation.

The work detailed in this dissertation focuses on the creation of: (1) a new method to prove and guarantee the training convergence of the RNN, and (2) a new method to quantify the robustness of the RNN using uncertainty-propagation analysis. With the proposed study, the RNN and related algorithms are developed to model nonlinear dynamical systems, which can benefit future modeling applications, such as condition-monitoring studies, in terms of robustness and accuracy.
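The robustness measure above propagates input uncertainty through the network via the unscented transform. The following is a hedged sketch of the standard scaled unscented transform applied to a stand-in tanh nonlinearity; the mapping f, the mean m, and the covariance C are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def unscented_transform(f, m, C, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate (m, C) through a nonlinear mapping f via 2n+1 sigma points."""
    n = len(m)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * C)      # scaled matrix square root
    # Sigma points: the mean, plus/minus the columns of L.
    sigmas = np.vstack([m, m + L.T, m - L.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])      # push each point through f
    mean = wm @ ys
    diff = ys - mean
    cov = (wc * diff.T) @ diff                 # weighted output covariance
    return mean, cov

f = lambda x: np.tanh(x)                       # stand-in for the trained RNN
m = np.array([0.5, -0.2])                      # illustrative input mean
C = 0.05 * np.eye(2)                           # illustrative input covariance
mean, cov = unscented_transform(f, m, C)
```

Unlike a first-order (Jacobian-based) linearization, the sigma points sample the input distribution directly, so the output covariance captures the nonlinearity of f to second order, which is what makes the transform attractive as a robustness measure.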

Bibliographic record

  • Author: Wang, Xiaoyu
  • Affiliation: Clemson University
  • Degree grantor: Clemson University
  • Subjects: Mechanical Engineering; Artificial Intelligence; Computer Science
  • Degree: Ph.D.
  • Year: 2010
  • Pages: 199 p.
  • Format: PDF
  • Language: English
