IEEE International Solid-State Circuits Conference (ISSCC)

16.4 An 89TOPS/W and 16.3TOPS/mm2 All-Digital SRAM-Based Full-Precision Compute-In Memory Macro in 22nm for Machine-Learning Edge Applications


Abstract

From the cloud to edge devices, artificial intelligence (AI) and machine learning (ML) are widely used in many cognitive tasks, such as image classification and speech recognition. In recent years, research on hardware accelerators for AI edge devices has received more attention, mainly due to the advantages of AI at the edge, including privacy, low latency, and more reliable and effective use of network bandwidth. However, traditional computing architectures (such as CPUs, GPUs, FPGAs, and even existing AI accelerator ASICs) cannot meet the future needs of energy-constrained AI edge applications. This is because ML computing is data-centric: most of the energy in these architectures is consumed by memory accesses. To improve energy efficiency, both academia and industry are exploring a new computing architecture, namely compute-in-memory (CIM). CIM research has focused on more analog approaches with high energy efficiency; however, loss of accuracy, due to a low SNR, is their main disadvantage. An analog approach may therefore be unsuitable for applications that require high accuracy.
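The accuracy trade-off described above can be illustrated with a toy multiply-accumulate (MAC) model: a digital CIM macro computes the dot product exactly, while an analog macro accumulates charge or current that picks up noise before quantization. This is a minimal sketch for intuition only; the noise level, bit widths, and data are illustrative assumptions, not figures from the paper.

```python
import random

def digital_mac(weights, activations):
    # All-digital CIM: exact, full-precision multiply-accumulate.
    return sum(w * a for w, a in zip(weights, activations))

def noisy_analog_mac(weights, activations, noise_sigma=2.0):
    # Toy analog CIM model: the ideal sum accumulates on a shared
    # line, acquires Gaussian noise (low SNR), then is quantized.
    ideal = digital_mac(weights, activations)
    return round(ideal + random.gauss(0.0, noise_sigma))

random.seed(0)
w = [random.randint(-8, 7) for _ in range(64)]   # 4b signed weights
a = [random.randint(0, 15) for _ in range(64)]   # 4b activations
exact = digital_mac(w, a)
approx = noisy_analog_mac(w, a)
print("digital:", exact, "analog:", approx, "error:", abs(exact - approx))
```

In a real network, such per-MAC errors accumulate across layers, which is why analog CIM can degrade classification accuracy while the all-digital approach preserves full precision at the cost of switching energy.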
