IEEE International Conference on Computer and Communications

Multiplying Elimination of 8-bit Low-Precision Neural Networks Exploiting Weight and Activation Repetition

Abstract

Convolutional neural networks (CNNs) have been applied in a wide range of domains, such as image recognition and speech recognition, and have even achieved higher prediction accuracy than the human eye. However, the computational complexity of a CNN grows rapidly with network scale, and even a moderate CNN instance may involve a huge number of multiply-accumulate operations. This can significantly prolong training and inference and incur large energy consumption. Several low-precision CNN acceleration methods have therefore been proposed to reduce the time complexity at the price of reduced computational accuracy, but they do not fundamentally reduce the number of calculations. Against this backdrop, this paper proposes a table lookup-based multiplication elimination method for low-precision CNNs that exploits their weight and activation repetition. In our method, a table storing all possible multiplication results is established in advance, and every multiplication encountered is replaced by a simple table lookup. Analysis results show that our proposal can greatly reduce the computational time, memory requirement, and energy consumption of low-precision CNNs.
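The abstract does not spell out how the lookup table is organized, so the following is a minimal sketch of the general idea in Python, under stated assumptions: weights are signed 8-bit, activations are unsigned 8-bit (e.g., post-ReLU), and the table simply enumerates all 256 × 256 possible products. The names `product_table` and `dot_via_lookup` are illustrative, not from the paper.

```python
import numpy as np

# Assumption: int8 weights in [-128, 127], uint8 activations in [0, 255].
# Precompute every possible product once (256 x 256 = 65,536 entries);
# only this precomputation step uses real multiplications.
WEIGHT_OFFSET = 128  # shifts int8 weights into non-negative row indices
product_table = np.empty((256, 256), dtype=np.int32)
for w in range(-128, 128):
    for a in range(256):
        product_table[w + WEIGHT_OFFSET, a] = w * a

def dot_via_lookup(weights: np.ndarray, activations: np.ndarray) -> int:
    """Dot product with every w*a replaced by a table lookup;
    only the additions of the accumulate step remain."""
    assert weights.dtype == np.int8 and activations.dtype == np.uint8
    idx_w = weights.astype(np.int32) + WEIGHT_OFFSET
    idx_a = activations.astype(np.int32)
    return int(product_table[idx_w, idx_a].sum())

# Quick check against a direct multiply-accumulate.
rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=1024, dtype=np.int8)
a = rng.integers(0, 256, size=1024, dtype=np.uint8)
assert dot_via_lookup(w, a) == int(w.astype(np.int64) @ a.astype(np.int64))
print("lookup-based dot product matches direct multiply-accumulate")
```

Because an 8-bit network contains at most 256 distinct weight values and 256 distinct activation values, the same 64K-entry table can serve every layer: repeated weight-activation pairs reuse the same precomputed entry, and at inference time multiplication is replaced entirely by indexing.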