
Fast Text Compression with Neural Networks


Abstract

Neural networks have the potential to extend data compression algorithms beyond the character-level n-gram models now in use, but have usually been avoided because they are too slow to be practical. We introduce a model that produces better compression than popular Lempel-Ziv compressors (zip, gzip, compress), and is competitive in time, space, and compression ratio with PPM and Burrows-Wheeler algorithms, currently the best known. The compressor, a bit-level predictive arithmetic encoder using a 2-layer, 4 × 10^6 by 1 network, is fast (about 10^4 characters/second) because only 4-5 connections are simultaneously active and because it uses a variable learning rate optimized for one-pass training.
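The design the abstract describes, a neural predictor driving a bit-level arithmetic coder, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the 2-layer network is replaced by a single logistic neuron keyed on the previous 8 bits of context, the learning rate is fixed rather than variable, and all names (Predictor, compress, decompress) are hypothetical. The range-coder skeleton follows the common 32-bit carry-less scheme used in simple predictive coders.

```python
import math

class Predictor:
    # Hypothetical stand-in for the paper's network: one logistic
    # weight per 8-bit context, updated online by a gradient step.
    def __init__(self):
        self.w = [0.0] * 256   # one weight per context
        self.ctx = 0           # previous 8 bits seen

    def p(self):
        # P(next bit = 1), squashed into the open interval (0, 1)
        return 1.0 / (1.0 + math.exp(-self.w[self.ctx]))

    def update(self, bit):
        # online gradient step, then shift the bit into the context
        self.w[self.ctx] += 0.5 * (bit - self.p())
        self.ctx = ((self.ctx << 1) | bit) & 0xFF

def compress(data: bytes) -> bytes:
    pr, out = Predictor(), bytearray()
    x1, x2 = 0, 0xFFFFFFFF                  # current coding interval
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            p = min(4094, max(1, int(pr.p() * 4096)))  # 12-bit probability
            xmid = x1 + ((x2 - x1) >> 12) * p          # split the interval
            if bit: x2 = xmid
            else:   x1 = xmid + 1
            # renormalize: emit bytes while the top bytes agree
            while ((x1 ^ x2) & 0xFF000000) == 0:
                out.append(x2 >> 24)
                x1 = (x1 << 8) & 0xFFFFFFFF
                x2 = ((x2 << 8) | 0xFF) & 0xFFFFFFFF
            pr.update(bit)
    out.extend(x1.to_bytes(4, 'big'))       # flush a value inside [x1, x2]
    return bytes(out)

def decompress(code: bytes, n: int) -> bytes:
    pr, out = Predictor(), bytearray()      # predictor mirrors the encoder
    x1, x2 = 0, 0xFFFFFFFF
    x = int.from_bytes(code[:4].ljust(4, b'\0'), 'big')
    pos = 4
    for _ in range(n):
        byte = 0
        for _ in range(8):
            p = min(4094, max(1, int(pr.p() * 4096)))
            xmid = x1 + ((x2 - x1) >> 12) * p
            if x <= xmid: bit, x2 = 1, xmid
            else:         bit, x1 = 0, xmid + 1
            while ((x1 ^ x2) & 0xFF000000) == 0:
                x1 = (x1 << 8) & 0xFFFFFFFF
                x2 = ((x2 << 8) | 0xFF) & 0xFFFFFFFF
                nxt = code[pos] if pos < len(code) else 0
                x = ((x << 8) | nxt) & 0xFFFFFFFF
                pos += 1
            pr.update(bit)
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

The key property, which the paper exploits at much larger scale, is that the decoder re-runs the same predictor on the bits it has already decoded, so both sides see identical probabilities and no model needs to be transmitted.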
