
Addressing power, performance and end-to-end QoS in emerging multicores through system-wide resource management.


Abstract

Multicores are now ubiquitous [1, 2, 3, 4], owing to the benefits they bring over single-core architectures, including improved performance, lower power consumption, and reduced design complexity. Several resources, ranging from the cores themselves to multiple levels of on-chip caches and off-chip memory bandwidth, are typically shared in a multicore processor. Prudent management of these shared resources for achieving predictable performance and optimizing energy efficiency is critical and has therefore received considerable attention in recent times. In my research, I have focused on proposing novel schemes to dynamically manage the various shared resources available in emerging multicores, targeting three main goals: (1) maximizing overall system performance, (2) meeting end-to-end QoS targets defined by the system administrator, and (3) optimizing power and energy consumption. We consider a wide range of resources, including cores, shared caches, off-chip memory bandwidth, on-chip communication resources, and power budgets. Further, towards achieving these goals, we employ formal control theory as a powerful tool for meeting high-level performance targets by dynamically managing and partitioning the shared resources.

Dynamic management of the shared resources in multicore systems with the goal of maximizing overall system performance is a main part of this dissertation. As we move towards many-core systems, interference in the shared cache continues to increase, making shared-cache management an important issue for improving overall system performance. As part of this work, we propose a dynamic cache management scheme for multiprogrammed, multithreaded applications, with the objective of obtaining maximum performance for both individual applications and the multithreaded workload mix. On the architectural side, in parallel with increasing core counts, the network-on-chip (NoC) is becoming one of the critical shared components that determine the overall performance, energy consumption, and reliability of emerging multicore systems. Targeting NoC-based multicores, we propose two network prioritization schemes that cooperatively improve performance by reducing end-to-end memory access latencies. In another work on NoCs, focusing on a heterogeneous NoC in which each router (potentially) has a different processing delay, we propose a process-variation-aware source routing scheme to enhance the performance of communications in the NoC-based system. Our scheme assigns a route to each communication of an incoming application, considering both the processing latencies of the routers resulting from process variation and the communications that already exist in the network, in order to reduce traffic congestion.

Power and energy consumption in multicores is another important area targeted in this research. In one part of this dissertation, targeting NoC-based multicores, we propose a two-level power-budget distribution mechanism, called PEPON, in which the first level distributes the overall power budget of the multicore system among the various types of on-chip resources, such as the cores, caches, and NoC, and the second level determines the allocation of power to individual instances of each resource type. Both distributions are oriented towards maximizing workload performance without exceeding the specified power budget.
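For illustration only, the two-level budget split described above might be sketched as follows; the proportional-share policy, the budget value, and all resource and demand names are assumptions made for the sketch, not PEPON's actual algorithm.

```python
# Illustrative sketch only: a two-level power-budget split in the spirit of
# the PEPON description above. The proportional-share policy, budget value,
# and resource/demand names are assumptions, not the dissertation's algorithm.

CHIP_BUDGET_W = 90.0  # hypothetical chip-wide power budget (watts)

def split_budget(total, weights):
    """Divide a budget proportionally to the given weights."""
    total_weight = sum(weights.values())
    return {name: total * w / total_weight for name, w in weights.items()}

# Level 1: divide the chip budget among resource *types* (cores, caches, NoC),
# weighted here by an assumed per-type demand estimate.
type_demand = {"cores": 6.0, "caches": 2.0, "noc": 1.0}
per_type = split_budget(CHIP_BUDGET_W, type_demand)

# Level 2: divide each type's share among individual instances of that type,
# again weighted by an assumed per-instance demand (e.g. per-core activity).
core_demand = {f"core{i}": d for i, d in enumerate([1.0, 0.4, 0.9, 0.7])}
per_core = split_budget(per_type["cores"], core_demand)

print(per_type)   # e.g. {'cores': 60.0, 'caches': 20.0, 'noc': 10.0}
print(per_core)
```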
As the memory system is a large contributor to the energy consumption of a server, there have been prior efforts to reduce the power and energy consumption of the memory system. DVFS schemes have been used to reduce memory power, but they come with a performance penalty. In this research, we propose HiPEMM, a high-performance DVFS mechanism that intelligently reduces memory power by dynamically scaling individual memory channel frequencies. Our strategy also involves clustering the running applications based on their sensitivity to memory latency and assigning memory channels to the application clusters.

Providing end-to-end QoS in future multicores is essential for supporting the widespread adoption of multicore architectures in virtualized servers and cloud computing systems. An initial step towards such end-to-end QoS support in multicores is to ensure that at least the major on-chip computational and memory resources are managed efficiently in a coordinated fashion. In this dissertation, we propose a platform for end-to-end on-chip resource management in multicore processors. Assuming that each application specifies a performance target/SLA, the main objective is to dynamically provision sufficient on-chip resources to applications so that the specified targets are achieved. We employ a feedback-based system, designed as a Single-Input, Multiple-Output (SIMO) controller with an Auto-Regressive-Moving-Average (ARMA) model, to capture the behaviors of different applications.
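As a rough illustration of the feedback structure described above, the sketch below shows a single performance-error input driving multiple resource knobs, together with a toy ARMA-style estimate of application behavior; the coefficients, gains, resource set, and IPC metric are assumptions for the sketch, not the controller design used in the dissertation.

```python
# Illustrative sketch only: a single-input (performance error), multiple-output
# (per-resource allocation) feedback step with a toy ARMA-style performance
# estimate. Coefficients, gains, and resource names are assumptions.

class ArmaEstimate:
    """Toy first-order ARMA-style estimate of an application's performance:
    y_hat[k] = a*y[k-1] + b*u[k] + c*e[k-1], with fixed illustrative coefficients."""
    def __init__(self, a=0.6, b=0.3, c=0.1):
        self.a, self.b, self.c = a, b, c
        self.prev_y = 0.0   # last measured performance
        self.prev_e = 0.0   # last estimation error (feeds the MA term)

    def update(self, measured, control_input):
        predicted = self.a * self.prev_y + self.b * control_input + self.c * self.prev_e
        self.prev_e = measured - predicted
        self.prev_y = measured
        return predicted

def simo_step(target, measured, alloc, gains):
    """One control interval: adjust every resource allocation by its gain
    times the (single) performance error, never letting an allocation go negative."""
    error = target - measured
    return {res: max(0.0, alloc[res] + gains[res] * error) for res in alloc}

# Hypothetical per-application state: an IPC target (SLA), current allocations
# of three shared resources, and per-resource controller gains.
target_ipc = 1.2
alloc = {"cache_ways": 4.0, "mem_bw_share": 0.25, "power_share": 0.20}
gains = {"cache_ways": 2.0, "mem_bw_share": 0.10, "power_share": 0.05}

model = ArmaEstimate()
measured_ipc = 0.9                                   # from performance counters
model.update(measured_ipc, sum(alloc.values()))      # toy aggregate control input
alloc = simo_step(target_ipc, measured_ipc, alloc, gains)
print(alloc)  # allocations nudged up because measured IPC is below the target
```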

Bibliographic details

  • Author: Sharifi, Akbar
  • Author affiliation: The Pennsylvania State University
  • Awarding institution: The Pennsylvania State University
  • Subjects: Computer engineering; Computer science
  • Degree: Ph.D.
  • Year: 2013
  • Pagination: 192 p.
  • Total pages: 192
  • Original format: PDF
  • Language: eng
