Annual Conference on Neural Information Processing Systems

Stochastic Variance Reduction Methods for Saddle-Point Problems


Abstract

We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which are common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities, (b) there are two notions of splits, in terms of functions, or in terms of partial derivatives, (c) the split does need to be done with convex-concave terms, (d) non-uniform sampling is key to an efficient algorithm, both in theory and practice, and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm which is always superior to accelerated batch algorithms.
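To make the operator viewpoint of point (a) concrete, below is a minimal NumPy sketch of an SVRG-style method applied to a toy separable saddle-point problem. Everything here is an illustrative assumption rather than the paper's exact algorithm: the toy objective, the function name `svrg_saddle`, the step size, and the plain forward updates are simplifications (the paper's method additionally uses proximal steps on the regularizers and the non-uniform sampling highlighted in point (d)).

```python
import numpy as np

# Toy separable saddle-point problem (illustrative, not from the paper):
#   min_x max_y  K(x, y) = (1/n) * sum_i y^T A_i x + (lam/2)||x||^2 - (lam/2)||y||^2.
# Its monotone operator is F(x, y) = (grad_x K, -grad_y K), split component-wise.

def svrg_saddle(A_list, lam=0.5, step=0.2, epochs=50, seed=0):
    """SVRG-style variance-reduced forward steps on the saddle-point operator."""
    rng = np.random.default_rng(seed)
    n = len(A_list)
    p, d = A_list[0].shape
    x, y = rng.standard_normal(d), rng.standard_normal(p)

    def F_i(i, x, y):
        # Operator of the i-th component: (A_i^T y + lam*x, lam*y - A_i x).
        return A_list[i].T @ y + lam * x, lam * y - A_list[i] @ x

    for _ in range(epochs):
        x_s, y_s = x.copy(), y.copy()  # snapshot, refreshed once per epoch
        Fx_s = sum(A.T @ y_s for A in A_list) / n + lam * x_s   # full operator
        Fy_s = lam * y_s - sum(A @ x_s for A in A_list) / n     # at the snapshot
        for _ in range(n):  # one pass of stochastic inner steps
            i = rng.integers(n)
            gx, gy = F_i(i, x, y)
            hx, hy = F_i(i, x_s, y_s)
            # Unbiased variance-reduced estimate: F_i(z) - F_i(z_snap) + F(z_snap).
            x = x - step * (gx - hx + Fx_s)
            y = y - step * (gy - hy + Fy_s)
    return x, y

# With no linear terms the unique saddle point is (0, 0), so the iterates
# should shrink toward zero:
A_list = [np.random.randn(5, 8) / 10 for _ in range(20)]
x_hat, y_hat = svrg_saddle(A_list)
print(np.linalg.norm(x_hat), np.linalg.norm(y_hat))
```

The essential mechanism is visible in the inner loop: the estimate F_i(z) - F_i(z_snap) + F(z_snap) remains unbiased while its variance shrinks as the iterates approach the snapshot, which is what permits linear convergence in the strongly convex-concave setting.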
