
Extending caching for two applications: Disseminating live data and accessing data from disks.

Abstract

In this dissertation, we extend caching for two applications. For the first application, we propose an architecture for creating a hierarchy of dynamically created caches. For the second application, we propose a cache management policy for a cache that prefetches predicted data.

In the first part of this dissertation, we address the problem of disseminating fast-changing data to a large number of users. Because the data changes quickly, caching it at clients is not very effective. Additionally, the static manner in which Web caches are configured makes it hard for such caches to adapt to dynamically changing access patterns. In our analysis of the World Cup Soccer data, we found that a large fraction of the clients accessing fast-changing data belong to a small fraction of network domains. Our solution, domain caching, exploits this property by dynamically creating a cache when a large number of clients from a domain access the same data. The server then instructs all clients in that domain to access the data from a selected client, the domain cache.

In the second part of this dissertation, we address the problem of accessing data from a disk farm. Prefetching is one approach to exploiting the parallelism a disk farm provides, but future data accesses are difficult to predict, so not all prefetched data are guaranteed to be accessed. We propose and evaluate a cache management policy that integrates caching and prefetching by managing cached data and prefetched data separately. The policy estimates the relative values of cached data and prefetched data, and uses this information to identify the least valuable cached block.
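The second contribution — managing demand-fetched and prefetched blocks in separate pools and evicting the least valuable block — can be sketched as follows. This is a minimal illustration, not the dissertation's actual policy: the class name, the two-queue layout, and the heuristic of treating the oldest unused prefetch as less valuable than the least-recently-used cached block are all assumptions made for the example.

```python
from collections import OrderedDict

class IntegratedCache:
    """Illustrative cache that manages demand-fetched blocks and
    prefetched blocks separately and evicts the least valuable one.
    The value heuristic here (stale prefetches first, then LRU) is a
    placeholder for the policy's actual relative-value estimate."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cached = OrderedDict()      # demand-fetched blocks, LRU order
        self.prefetched = OrderedDict()  # prefetched blocks, prediction order

    def _evict(self):
        # Assumed heuristic: an unused prefetch is less valuable than
        # the least-recently-used demand-fetched block.
        if self.prefetched:
            self.prefetched.popitem(last=False)
        else:
            self.cached.popitem(last=False)

    def access(self, block):
        if block in self.prefetched:     # prefetch hit: promote to cached pool
            self.prefetched.pop(block)
            self.cached[block] = True
        elif block in self.cached:       # cache hit: refresh LRU position
            self.cached.move_to_end(block)
        else:                            # miss: fetch on demand
            if len(self.cached) + len(self.prefetched) >= self.capacity:
                self._evict()
            self.cached[block] = True

    def prefetch(self, block):
        if block in self.cached or block in self.prefetched:
            return
        if len(self.cached) + len(self.prefetched) >= self.capacity:
            self._evict()
        self.prefetched[block] = True
```

Keeping the two pools separate lets the eviction decision compare their relative values explicitly — a mispredicted prefetch cannot silently displace frequently reused demand-fetched data.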

Bibliographic Details

  • Author

    Vellanki, Vivekanand.

  • Author's Affiliation

    Georgia Institute of Technology.

  • Degree Grantor: Georgia Institute of Technology.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2001
  • Pages: 225 p.
  • Total Pages: 225
  • Format: PDF
  • Language: English
  • Classification (CLC): Automation and computer technology
  • Keywords

