IEEE Transactions on Information Theory

Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence

Abstract

In this paper, an online learning algorithm is proposed as sequential stochastic approximation of a regularization path converging to the regression function in reproducing kernel Hilbert spaces (RKHSs). We show that it is possible to produce the best known strong (RKHS norm) convergence rate of batch learning, through a careful choice of the gain or step size sequences, depending on regularity assumptions on the regression function. The corresponding weak (mean square distance) convergence rate is optimal in the sense that it reaches the minimax and individual lower rates in this paper. In both cases, we deduce almost sure convergence, using Bernstein-type inequalities for martingales in Hilbert spaces. To achieve this, we develop a bias-variance decomposition similar to the batch learning setting; the bias consists in the approximation and drift errors along the regularization path, which display the same rates of convergence, and the variance arises from the sample error analyzed as a (reverse) martingale difference sequence. The rates above are obtained by an optimal tradeoff between the bias and the variance.
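The abstract does not spell out the update rule, so the sketch below is only a rough illustration of the kind of algorithm described: a regularized stochastic-gradient step in an RKHS whose gain sequence gamma_t and regularization sequence lambda_t both decay with t, so that the iterate tracks the regularization path. The Gaussian kernel, the polynomial schedules, and all function names are illustrative assumptions, not the paper's exact construction or rates.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def online_regularized_kernel_regression(stream, gamma0=1.0, lam0=1.0,
                                         gamma_exp=0.5, lam_exp=0.5, sigma=1.0):
    """Sketch of online regularized least squares in an RKHS (assumed form).

    At step t, given a fresh sample (x_t, y_t), the estimate
    f_t = sum_i a_i K(c_i, .) is updated by a stochastic gradient step on the
    regularized least-squares risk:

        f_{t+1} = f_t - gamma_t * [ (f_t(x_t) - y_t) K(x_t, .) + lambda_t * f_t ]

    The polynomial schedules gamma_t = gamma0 * t^(-gamma_exp) and
    lambda_t = lam0 * t^(-lam_exp) are placeholders; the paper tunes such
    sequences according to regularity assumptions on the regression function.
    """
    centers, coeffs = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        gamma_t = gamma0 * t ** (-gamma_exp)   # gain (step size) sequence
        lam_t = lam0 * t ** (-lam_exp)         # regularization path lambda_t -> 0
        # Evaluate f_t(x_t) from the current kernel expansion.
        f_xt = sum(a * gaussian_kernel(c, x_t, sigma)
                   for c, a in zip(centers, coeffs))
        # Shrink existing coefficients: the -gamma_t * lambda_t * f_t term ...
        coeffs = [a * (1.0 - gamma_t * lam_t) for a in coeffs]
        # ... and add a new kernel center for the data-fit gradient term.
        centers.append(x_t)
        coeffs.append(-gamma_t * (f_xt - y_t))
    return centers, coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stream: noisy samples of a smooth target on [0, 1].
    data = [(np.array([x]), np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal())
            for x in rng.uniform(0, 1, 500)]
    centers, coeffs = online_regularized_kernel_regression(data, sigma=0.2)
    x_test = np.array([0.25])
    f_hat = sum(a * gaussian_kernel(c, x_test, 0.2) for c, a in zip(centers, coeffs))
    print(f"estimate at x=0.25: {f_hat:.3f}, target value: {np.sin(np.pi / 2):.3f}")
```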