5th Annual IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems

Controlling the hidden layers' output to optimizing the training process in the Deep Neural Network algorithm



Abstract

Deep learning is one of the most recent developments of the Artificial Neural Network (ANN) in machine learning. The Deep Neural Network (DNN) algorithm is commonly used in image and speech recognition applications. As ANNs have developed, DNNs have come to contain many hidden layers. In a DNN, the output of each node is a quadratic function of its inputs, and the DNN training process is very difficult. In this paper, we try to optimize the training process by slightly modifying the deep architecture and combining several existing algorithms. The output error of each unit in the previous layer is calculated, and the weights of the unit with the smallest error are kept unchanged in the next iteration. This paper uses MNIST handwriting images as its training and test data. From our experiments, it can be concluded that with this optimization, selecting a unit in each hidden layer, the DNN training process becomes approximately 8% faster.
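The abstract does not give the authors' pseudocode, but the core idea it describes, computing a per-unit output error in a hidden layer and keeping (not updating) the weights of the unit with the smallest error in the next iteration, can be sketched as follows. This is a minimal NumPy illustration under assumptions not stated in the abstract: a tiny one-hidden-layer network with sigmoid units, a mean-squared-error objective, and random synthetic data standing in for MNIST. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 4 inputs -> 3 hidden units -> 1 output (synthetic data,
# standing in for MNIST in this illustration).
X = rng.normal(size=(8, 4))            # mini-batch of 8 samples
y = rng.normal(size=(8, 1))            # regression targets
W1 = rng.normal(size=(4, 3)) * 0.1     # input -> hidden weights
W2 = rng.normal(size=(3, 1)) * 0.1     # hidden -> output weights
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5):
    # Forward pass.
    h = sigmoid(X @ W1)                # hidden-layer outputs
    out = h @ W2                       # linear output layer
    err = out - y                      # output error

    # Backpropagated error signal for each hidden unit.
    delta_h = (err @ W2.T) * h * (1.0 - h)

    # Per-unit error magnitude, averaged over the batch.
    unit_err = np.mean(np.abs(delta_h), axis=0)

    # Keep the unit with the smallest error: mask out its weight
    # update so its incoming weights are maintained this iteration.
    keep = np.argmin(unit_err)
    mask = np.ones(W1.shape[1])
    mask[keep] = 0.0

    W1 -= lr * (X.T @ delta_h) * mask  # masked hidden-layer update
    W2 -= lr * (h.T @ err)             # ordinary output-layer update
```

Skipping the lowest-error unit's update each iteration saves a fraction of the weight-update work per layer, which is consistent with the modest (roughly 8%) speed-up the abstract reports.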
