ACM Transactions on Embedded Computing Systems

Dynamic Scratchpad Memory Management for Code in Portable Systems with an MMU


Abstract

In this work, we present a dynamic memory allocation technique for a novel, horizontally partitioned memory subsystem targeting contemporary embedded processors with a memory management unit (MMU). We propose to replace the on-chip instruction cache with a scratchpad memory (SPM) and a small minicache. Serializing the address translation with the actual memory access enables the memory system to access either only the SPM or the minicache. Independent of the SPM size and based solely on profiling information, a postpass optimizer classifies the code of an application binary into a pageable and a cacheable code region. The latter is placed at a fixed location in the external memory and cached by the minicache. The former, the pageable code region, is copied on demand to the SPM before execution. Both the pageable code region and the SPM are logically divided into pages the size of an MMU memory page. Using the MMU's page-fault exception mechanism, a runtime scratchpad memory manager (SPMM) tracks page accesses and copies frequently executed code pages to the SPM before they get executed. In order to minimize the number of page transfers from the external memory to the SPM, good code placement techniques become more important with increasing MMU page sizes. We discuss code-grouping techniques and provide an analysis of the effect of the MMU's page size on execution time, energy consumption, and external memory accesses. We show that by using the data cache as a victim buffer for the SPM, significant energy savings are possible. We evaluate our SPM allocation strategy with fifteen applications, including H.264, MP3, MPEG-4, and PGP. The proposed memory system requires 8% less die area compared to a fully-cached configuration. On average, we achieve a 31% improvement in runtime performance and a 35% reduction in energy consumption with an MMU page size of 256 bytes.
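The abstract describes an SPMM that intercepts page faults, counts page accesses, and copies hot code pages into a page-sized-slot SPM before execution. The following is a minimal simulation sketch of that idea under stated assumptions: the class and function names are hypothetical, and a simple LRU eviction policy stands in for the paper's profile-driven placement and code-grouping heuristics.

```python
# Simulation sketch of a page-fault-driven scratchpad memory manager (SPMM).
# Assumption: LRU eviction is used here for illustration only; the paper's
# actual allocator relies on profiling information and code grouping.
from collections import OrderedDict

class ScratchpadManager:
    def __init__(self, spm_pages):
        self.spm_pages = spm_pages      # SPM capacity in MMU-sized page slots
        self.resident = OrderedDict()   # resident page ids, kept in LRU order
        self.transfers = 0              # page copies from external memory to SPM

    def access(self, page):
        """Model one code-page access; return True if it page-faults."""
        if page in self.resident:
            self.resident.move_to_end(page)   # hit: refresh LRU position
            return False
        # Page fault: evict the LRU page if the SPM is full, then copy
        # the faulting page into the SPM (modeled as a transfer count).
        if len(self.resident) >= self.spm_pages:
            self.resident.popitem(last=False)
        self.resident[page] = True
        self.transfers += 1
        return True

def run_trace(trace, spm_pages):
    """Replay a code-page access trace; return the number of page transfers."""
    spmm = ScratchpadManager(spm_pages)
    return sum(spmm.access(p) for p in trace)

# A hot loop touching pages 0-1, interrupted by cold pages 2-5:
trace = [0, 1] * 10 + [2, 3, 4, 5] + [0, 1] * 10
```

Replaying the trace with a 2-page SPM yields 8 transfers (the cold pages evict the hot loop, which must be re-fetched), while a 6-page SPM needs only the 6 compulsory transfers. This mirrors the abstract's point that minimizing external-memory-to-SPM page transfers drives both runtime and energy results.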
