Normalization and dropout for stochastic computing-based deep convolutional neural networks

Abstract

Recently, the Deep Convolutional Neural Network (DCNN) has been recognized as the most effective model for pattern recognition and classification tasks. With the fast-growing Internet of Things (IoT) and wearable devices, it becomes attractive to implement DCNNs in embedded and portable systems. However, novel computing paradigms are urgently required to deploy DCNNs, which have huge power consumption and complex topologies, in systems with limited area and power supply. Recent works have demonstrated that Stochastic Computing (SC) can radically simplify the hardware implementation of arithmetic units and has the potential to bring the success of DCNNs to embedded systems. This paper introduces normalization and dropout, two techniques essential to state-of-the-art DCNNs, into existing SC-based DCNN frameworks. In this work, the feature extraction block of DCNNs is implemented using an approximate parallel counter, a near-max pooling block, and an SC-based rectified linear activation unit. A novel SC-based normalization design is proposed, consisting of a square-and-summation unit, an activation unit, and a division unit. The dropout technique is integrated into the training phase, and the learned weights are adjusted accordingly during the hardware implementation. Experimental results on AlexNet with the ImageNet dataset show that the SC-based DCNN with the proposed normalization and dropout techniques achieves a 3.26% top-1 and a 3.05% top-5 accuracy improvement over the SC-based DCNN without these two techniques, confirming the effectiveness of the proposed normalization and dropout designs.
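To illustrate the two ideas the abstract rests on, the following is a minimal software sketch, not the paper's hardware design: unipolar stochastic computing, where a single AND gate multiplies two bit-streams, and the standard dropout deployment rule of scaling learned weights by the keep probability (the abstract says the weights are "adjusted during the hardware implementation"; the exact adjustment used in the paper is an assumption here). All function names are illustrative.

```python
import random

def to_stream(p, length, rng):
    # Encode a value p in [0, 1] as a unipolar bit-stream:
    # each bit is 1 with probability p, so the ones-density encodes p.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(xs, ys):
    # In unipolar SC, a bitwise AND of two independent streams multiplies
    # their encoded values: P(a AND b) = P(a) * P(b).
    return [a & b for a, b in zip(xs, ys)]

def decode(stream):
    # Recover the encoded value as the fraction of 1s in the stream.
    return sum(stream) / len(stream)

def scale_weights_for_dropout(weights, keep_prob):
    # Standard dropout rule (an assumption, not taken from the paper):
    # after training with dropout, scale weights by the keep probability
    # before inference so expected activations match training.
    return [w * keep_prob for w in weights]

rng = random.Random(0)
n = 100_000
x, y = to_stream(0.6, n, rng), to_stream(0.5, n, rng)
product = decode(sc_multiply(x, y))  # close to 0.6 * 0.5 = 0.30
scaled = scale_weights_for_dropout([0.8, -0.4], keep_prob=0.5)
```

Longer streams shrink the stochastic noise in `product` at the cost of latency, which is the central accuracy/throughput trade-off in SC hardware.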

Bibliographic details

  • Source
    Integration | 2019, Issue 3 | pp. 395-403 | 9 pages
  • Author affiliations

    Univ Southern Calif, Dept Elect Engn, Los Angeles, CA 90089 USA;

    Univ Southern Calif, Dept Elect Engn, Los Angeles, CA 90089 USA;

    Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13210 USA;

    Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13210 USA;

    Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13210 USA;

    Univ Southern Calif, Dept Elect Engn, Los Angeles, CA 90089 USA|Univ Southern Calif, Informat Sci Inst, Marina Del Rey, CA 90292 USA;

    Univ Southern Calif, Dept Elect Engn, Los Angeles, CA 90089 USA;

    Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13210 USA;

    CUNY City Coll, Dept Elect Engn, New York, NY 10031 USA;

    Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13210 USA;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English (eng)
  • Keywords

    Deep learning; Deep convolutional neural networks; Dropout; Normalization;

