International Symposium on Pervasive Systems, Algorithms and Networks

A High Performance Framework for Large-Scale 2D Convolution Operation on FPGA



Abstract

Convolutional neural network (CNN), as a focus of the artificial intelligence field, has attracted increasing attention in recent years. Various accelerators based on the FPGA platform have been proposed because of its high performance and high energy efficiency. However, limited by on-chip memory resources, these accelerators can only accelerate small networks. Because the 2D convolution operation is the most compute-intensive part, accelerating large-scale 2D convolution has become the key to accelerating large-scale CNNs on FPGA platforms. This paper presents a rotary data-storage method and a new PE (processing unit) design. Compared with the traditional Z-type PE, the new PE performs no redundant calculations when the stride of the convolution operation is greater than 1, which significantly reduces the computation time. In addition, to reduce the external memory bandwidth under limited on-chip resources, two optimized architectures and a block-calculation method are proposed, which reduce the usage of on-chip memory resources. As a case study, we select several convolution layers of ResNet-34 and compare the proposed framework with a previous accelerator. Under the same resource consumption, the results show that the proposed framework needs only 29.9% of the time and 51% of the external memory bandwidth of the previous accelerator.
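
The abstract names two concrete techniques: skipping the computations that a stride greater than 1 would discard, and block (tile) calculation to bound on-chip buffer usage. Below is a minimal software-level sketch of these two ideas, not the paper's actual PE or HLS design; the tile size, layer shape, and function names are hypothetical and chosen only for illustration.

```c
/*
 * Software-level sketch (hypothetical shapes, not the paper's RTL/HLS design):
 *   1. Stride-aware output indexing: no multiply-accumulate is issued for
 *      input positions that a stride > 1 would discard anyway.
 *   2. Block (tile) calculation: the output is walked in TILE x TILE pieces,
 *      so only one input tile (plus the kernel halo) would have to be kept
 *      in on-chip buffers at a time.
 */
#include <stdio.h>

#define IN_H 16      /* hypothetical input feature-map height      */
#define IN_W 16      /* hypothetical input feature-map width       */
#define K 3          /* kernel size                                */
#define STRIDE 2     /* stride > 1: output grid is subsampled      */
#define TILE 8       /* output tile edge processed per pass        */

#define OUT_H ((IN_H - K) / STRIDE + 1)
#define OUT_W ((IN_W - K) / STRIDE + 1)

static float in[IN_H][IN_W];
static float ker[K][K];
static float out[OUT_H][OUT_W];

/* Convolve one output tile: rows [or0, or1) x cols [oc0, oc1) of the output. */
static void conv_tile(int or0, int or1, int oc0, int oc1)
{
    for (int oy = or0; oy < or1; ++oy) {
        for (int ox = oc0; ox < oc1; ++ox) {
            /* Only the input window anchored at (oy*STRIDE, ox*STRIDE) is
             * read, so positions skipped by the stride never enter the
             * datapath. */
            float acc = 0.0f;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    acc += in[oy * STRIDE + ky][ox * STRIDE + kx] * ker[ky][kx];
            out[oy][ox] = acc;
        }
    }
}

int main(void)
{
    /* Fill input and kernel with simple test data. */
    for (int y = 0; y < IN_H; ++y)
        for (int x = 0; x < IN_W; ++x)
            in[y][x] = (float)(y + x);
    for (int ky = 0; ky < K; ++ky)
        for (int kx = 0; kx < K; ++kx)
            ker[ky][kx] = 1.0f / (K * K);

    /* Block calculation: process the output tile by tile. */
    for (int oy = 0; oy < OUT_H; oy += TILE)
        for (int ox = 0; ox < OUT_W; ox += TILE) {
            int or1 = oy + TILE < OUT_H ? oy + TILE : OUT_H;
            int oc1 = ox + TILE < OUT_W ? ox + TILE : OUT_W;
            conv_tile(oy, or1, ox, oc1);
        }

    printf("out[0][0] = %f, out[%d][%d] = %f\n",
           out[0][0], OUT_H - 1, OUT_W - 1, out[OUT_H - 1][OUT_W - 1]);
    return 0;
}
```

On an FPGA, each call to a routine like conv_tile would correspond to loading one input tile (plus its K-1 halo) into on-chip memory, which is what bounds the on-chip buffer requirement independently of the full feature-map size.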
