Learning Memory Access Patterns

Abstract

The explosion in workload complexity and the recent slow-down in Moore's law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network-based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.
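To make the n-gram analogy concrete, here is a minimal sketch, not the authors' released code, of the modeling idea the abstract describes: treat a trace of memory-address deltas as a token sequence and train an LSTM to predict the next delta, the same way a language model predicts the next word. The framework (PyTorch), the class name DeltaLSTM, all layer sizes, and the synthetic trace are illustrative assumptions; the paper's actual setup quantizes real program traces into a vocabulary of frequent deltas.

```python
# A minimal sketch (assumptions noted above): an LSTM next-delta
# predictor over a small vocabulary of quantized address deltas.
import torch
import torch.nn as nn

class DeltaLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids of quantized deltas
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)  # logits over the next-delta vocabulary

# Toy training loop on a synthetic delta trace (real traces would come
# from instrumented program runs, with deltas mapped to vocabulary ids).
vocab_size, seq_len = 50, 32
model = DeltaLSTM(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

trace = torch.randint(0, vocab_size, (8, seq_len + 1))  # fake batch
inputs, targets = trace[:, :-1], trace[:, 1:]

for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At prediction time, the top-k deltas under the model's softmax would be added to the last observed address to form candidate prefetch targets, which is how a precision/recall evaluation like the one the abstract reports could be framed.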
