Theory of Computing Systems
Parallelism versus Memory Allocation in Pipelined Router Forwarding Engines

Abstract

A crucial problem that must be solved is the allocation of memory to processors in a pipeline. Ideally, each processor's memory would be entirely private (i.e., one-port memories) in order to minimize contention; however, this eliminates memory sharing. At the other extreme, a single memory shared by all processors gives ideal sharing but maximizes contention. Instead, in this paper we show that the perfect sharing of a single shared memory can be achieved with a collection of two-port memories, as long as the number of processors is less than the number of memories. We show that the allocation problem is NP-complete in general, but admits a fast approximation algorithm that comes within a factor of 3/2 asymptotically. The proof uses a new bin packing model, which is interesting in its own right. Further, for important special cases that arise in practice, a more sophisticated modification of this approximation algorithm is in fact optimal. We also discuss the online memory allocation problem and present fast online algorithms that provide good memory utilization while allowing fast updates.
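Because each two-port memory can serve at most two processors, the allocation problem resembles bin packing with at most two items per bin (one per port). As a rough illustration only, and not the paper's actual 3/2-approximation algorithm, here is a first-fit-decreasing sketch; `demands` and `memory_size` are hypothetical inputs:

```python
def allocate(demands, memory_size):
    """Assign per-processor memory demands to memories (bins) of capacity
    `memory_size`, with at most two demands per memory (two-port constraint).
    First-fit decreasing heuristic; returns a list of per-memory assignments."""
    bins = []  # each bin: [remaining_capacity, [demands assigned]]
    for d in sorted(demands, reverse=True):
        if d > memory_size:
            raise ValueError("demand exceeds a single memory's capacity")
        for b in bins:
            # A bin is eligible only if it has a free port and enough space.
            if len(b[1]) < 2 and b[0] >= d:
                b[0] -= d
                b[1].append(d)
                break
        else:
            # No eligible bin: open a new memory for this demand.
            bins.append([memory_size - d, [d]])
    return [b[1] for b in bins]

print(allocate([5, 3, 4, 2, 6], 8))  # -> [[6, 2], [5, 3], [4]]
```

Sorting the demands in decreasing order before placement is what distinguishes first-fit decreasing from plain first-fit and typically yields tighter packings; the two-item cap per bin models the port limit of the memories.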
