
Adaptive and Self-Confident On-Line Learning Algorithms

Abstract

Most of the performance bounds for on-line learning algorithms are proven assuming a constant learning rate. To optimize these bounds, the learning rate must be tuned based on quantities that are generally unknown, as they depend on the whole sequence of examples. In this paper we show that essentially the same optimized bounds can be obtained when the algorithms adaptively tune their learning rates as the examples in the sequence are progressively revealed. Our adaptive learning rates apply to a wide class of on-line algorithms, including p-norm algorithms for generalized linear regression and Weighted Majority for linear regression with absolute loss. We emphasize that our adaptive tunings are radically different from previous techniques, such as the so-called doubling trick. Whereas the doubling trick restarts the on-line algorithm several times using a constant learning rate for each run, our methods save information by changing the value of the learning rate very smoothly. In fact, for Weighted Majority over a finite set of experts our analysis provides a better leading constant than the doubling trick.
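To make the abstract's central idea concrete, the following is a minimal Python sketch of an exponentially weighted (Weighted Majority style) forecaster whose learning rate is re-tuned every round from the losses observed so far, in the spirit of the "self-confident" tuning described above. The specific rate eta_t = sqrt(ln N / (1 + L*_{t-1})), where L*_{t-1} is the best expert's cumulative loss so far, and the synthetic data are illustrative assumptions, not the paper's exact tuning or constants.

import math
import random

def self_confident_weighted_majority(expert_losses, n_experts):
    """Exponentially weighted forecaster with an adaptively tuned learning rate.

    expert_losses: iterable of per-round loss vectors of length n_experts,
                   with losses in [0, 1].
    Returns the cumulative expected loss of the forecaster.
    """
    cum_loss = [0.0] * n_experts   # cumulative loss of each expert
    total = 0.0                    # cumulative expected loss of the forecaster
    for losses in expert_losses:
        # Adaptive ("self-confident") rate based on the best loss so far;
        # it changes smoothly from round to round instead of restarting runs.
        best = min(cum_loss)
        eta = math.sqrt(math.log(n_experts) / (1.0 + best))
        # Exponential weights over the experts
        weights = [math.exp(-eta * l) for l in cum_loss]
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss incurred this round
        total += sum(p * l for p, l in zip(probs, losses))
        # Update the experts' cumulative losses
        cum_loss = [c + l for c, l in zip(cum_loss, losses)]
    return total

if __name__ == "__main__":
    random.seed(0)
    N, T = 5, 1000
    # Synthetic data: expert 0 is slightly better than the others
    rounds = [[random.random() * (0.8 if i == 0 else 1.0) for i in range(N)]
              for _ in range(T)]
    print("forecaster loss:", round(self_confident_weighted_majority(rounds, N), 2))
    print("best expert loss:", round(min(sum(r[i] for r in rounds) for i in range(N)), 2))

Contrast this with the doubling trick, which would fix eta for a guessed loss budget and restart the forecaster from uniform weights whenever that budget is exceeded, discarding the information accumulated in the weights.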