International Conference on Neural Information Processing

FPGA Implementation of Autoencoders Having Shared Synapse Architecture



Abstract

Deep neural networks (DNNs) are a state-of-the-art processing model in the field of machine learning. Implementing DNNs on embedded systems is required to realize artificial intelligence on robots and automobiles. Embedded systems demand high processing speed and low power consumption, while DNNs require considerable processing resources. A field-programmable gate array (FPGA) is one of the most suitable devices for embedded systems because of its low power consumption, high-speed processing, and reconfigurability. Autoencoders (AEs) are key building blocks of DNNs and comprise an input, a hidden, and an output layer. In this paper, we propose a novel hardware implementation of AEs with a shared synapse architecture. In the proposed architecture, the value of each weight is shared between the two interlayers, input-to-hidden and hidden-to-output. This architecture saves the limited resources of an FPGA by halving the number of synapse modules. Experimental results show that the proposed design can reconstruct input data and can be stacked. Compared with related works, the proposed design is described at the register-transfer level, is synthesizable, and is estimated to decrease total processing time.
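The weight-sharing idea in the abstract corresponds to a tied-weight autoencoder, where the hidden-to-output (decoder) interlayer reuses the transpose of the input-to-hidden (encoder) weight matrix instead of storing a second matrix. A minimal software sketch of this, assuming sigmoid activations and illustrative layer sizes (the paper's fixed-point FPGA modules are not reproduced here):

```python
import numpy as np

# Tied-weight ("shared synapse") autoencoder sketch.
# n_in and n_hid are illustrative; the single matrix W serves
# both interlayers, halving synapse (weight) storage.
rng = np.random.default_rng(0)
n_in, n_hid = 8, 4

W = rng.normal(scale=0.1, size=(n_hid, n_in))  # one shared weight matrix
b_h = np.zeros(n_hid)                          # hidden-layer bias
b_o = np.zeros(n_in)                           # output-layer bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(x):
    # input -> hidden interlayer uses W
    return sigmoid(W @ x + b_h)

def decode(h):
    # hidden -> output interlayer reuses W.T (no second matrix)
    return sigmoid(W.T @ h + b_o)

x = rng.random(n_in)
x_hat = decode(encode(x))   # reconstruction of the input
```

Because only `W`, `b_h`, and `b_o` are stored, the weight memory is roughly half that of an untied autoencoder with separate encoder and decoder matrices, which is the resource saving the abstract attributes to the shared synapse architecture.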
