Source: JMLR: Workshop and Conference Proceedings

Gradient Boosting on Stochastic Data Streams


Abstract

Boosting is a popular ensemble algorithm that generates more powerful learners by linearly combining base models from a simpler hypothesis class. In this work, we investigate the problem of adapting batch gradient boosting for minimizing convex loss functions to the online setting, where the loss at each iteration is sampled i.i.d. from an unknown distribution. To generalize from batch to online, we first introduce the definition of an online weak learning edge, with which, for strongly convex and smooth loss functions, we present an algorithm, Streaming Gradient Boosting (SGB), with guarantees that shrink exponentially in the number of weak learners. We further present an adaptation of SGB to optimize non-smooth loss functions, for which we derive an $O(\ln N / N)$ convergence rate. We also show that our analysis extends to the adversarial online learning setting under the stronger assumption that the online weak learning edge holds in the adversarial setting. Finally, we present experimental results showing that in practice our algorithms achieve results competitive with classic gradient boosting while using less computation.
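The abstract describes an online variant of gradient boosting in which, at each round, an i.i.d. sample arrives and the ensemble of weak learners is updated on the fly. Below is a minimal Python sketch of that structure under simplifying assumptions: squared loss (strongly convex and smooth), linear online weak learners, and a fixed shrinkage factor. The class names `StreamingGradientBoosting` and `OnlineWeakLearner` and the parameters `n_learners` and `shrinkage` are illustrative, not the paper's exact SGB algorithm.

```python
import numpy as np

class OnlineWeakLearner:
    """A hypothetical online weak learner: a linear model trained by online
    gradient descent. The paper only requires weak learners satisfying an
    online weak learning edge; this particular choice is an assumption."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, target):
        # One online least-squares step toward the supplied target.
        grad = (self.predict(x) - target) * x
        self.w -= self.lr * grad


class StreamingGradientBoosting:
    """Minimal sketch of streaming gradient boosting for the squared loss.
    Not the paper's exact SGB update rule; n_learners and shrinkage are
    illustrative parameters."""

    def __init__(self, dim, n_learners=10, shrinkage=0.5):
        self.learners = [OnlineWeakLearner(dim) for _ in range(n_learners)]
        self.shrinkage = shrinkage

    def predict(self, x):
        return sum(self.shrinkage * learner.predict(x) for learner in self.learners)

    def update(self, x, y):
        # Each weak learner fits the negative functional gradient of the loss
        # at the current partial ensemble prediction; for the squared loss
        # L(f) = (y - f)^2 / 2 that gradient is simply the residual y - f.
        partial = 0.0
        for learner in self.learners:
            residual = y - partial
            learner.update(x, residual)
            partial += self.shrinkage * learner.predict(x)


# Usage on a synthetic i.i.d. stream (data generated for illustration only).
rng = np.random.default_rng(0)
model = StreamingGradientBoosting(dim=5)
for _ in range(1000):
    x = rng.normal(size=5)
    y = np.sin(x[0]) + 0.1 * rng.normal()
    model.update(x, y)  # one boosting pass per streamed example
```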


