AAAI Conference on Artificial Intelligence

Aggregated Gradient Langevin Dynamics


Abstract

In this paper, we explore a general Aggregated Gradient Langevin Dynamics (AGLD) framework for Markov chain Monte Carlo (MCMC) sampling. We investigate the nonasymptotic convergence of AGLD through a unified analysis that covers different data-access strategies (e.g., random access, cyclic access, and random reshuffle) and snapshot-updating strategies, in both convex and nonconvex settings. This is the first time that bounds for I/O-friendly strategies such as cyclic access and random reshuffle have been established in the MCMC literature. The theoretical results also indicate that methods in the AGLD framework enjoy both low per-iteration computational complexity and short mixing time. Empirical studies demonstrate that our framework allows us to derive novel schemes that generate high-quality samples for large-scale Bayesian posterior learning tasks.
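The abstract gives no pseudocode, so the sketch below is an illustration only: a minimal SAGA-style instance of an aggregated-gradient Langevin update that supports the three data-access strategies the abstract names. The low per-iteration cost comes from evaluating a single per-datum gradient each step and correcting it with stored snapshot gradients. All names here (agld_sample, grad_i, step, access) are our own hypothetical choices, not the paper's API, and the actual AGLD framework and its snapshot-updating strategies may differ.

```python
import numpy as np

def agld_sample(grad_i, theta0, n_data, step, n_iter, access="cyclic", rng=None):
    """Hedged sketch of an aggregated-gradient Langevin sampler (SAGA-style).

    grad_i(theta, i) -> gradient of the i-th term of the negative log-posterior.
    access: "random" (i.i.d. index), "cyclic", or "reshuffle" (new permutation
    per pass), mirroring the data-access strategies named in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    # Snapshot table of per-datum gradients and its running sum.
    table = np.stack([grad_i(theta, i) for i in range(n_data)])
    table_sum = table.sum(axis=0)
    order = np.arange(n_data)
    samples = []
    for t in range(n_iter):
        k = t % n_data
        if access == "reshuffle" and k == 0:
            rng.shuffle(order)           # fresh permutation each pass
        i = rng.integers(n_data) if access == "random" else order[k]
        g_new = grad_i(theta, i)
        # Aggregated (variance-reduced) estimate of the full-data gradient,
        # built from one fresh gradient plus the stored snapshots.
        g_hat = n_data * (g_new - table[i]) + table_sum
        # Snapshot update: replace the stored gradient for datum i.
        table_sum += g_new - table[i]
        table[i] = g_new
        # Langevin step: drift along -g_hat plus injected Gaussian noise.
        theta = theta - step * g_hat + np.sqrt(2.0 * step) * rng.standard_normal(d)
        samples.append(theta.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Toy check with Gaussian terms f_i(t) = (t - x_i)^2 / 2, so the target
    # posterior is N(mean(x), 1/N); the mean of the draws should approach mean(x).
    x = np.random.default_rng(0).normal(1.0, 1.0, size=20)
    draws = agld_sample(lambda t, i: t - x[i], np.zeros(1), len(x),
                        step=1e-3, n_iter=5000, access="reshuffle")
    print(draws[1000:].mean(), x.mean())
```

One design note on this sketch: updating the snapshot for every visited index (rather than refreshing all snapshots at fixed epochs, SVRG-style) keeps each iteration at one gradient evaluation, at the cost of O(N) memory for the table; both are plausible snapshot-updating strategies within the framework the abstract describes.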
