Conference on Uncertainty in Artificial Intelligence

Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error



Abstract

We study the L_1-regularized maximum likelihood estimation (MLE) problem for discrete Markov random fields (MRFs), where efficient and scalable learning requires both sparse regularization and approximate inference. To address these challenges, we consider a stochastic learning framework called stochastic proximal gradient (SPG) [Honorio, 2012a; Atchade et al., 2014; Miasojedow and Rejchel, 2016]. SPG is an inexact proximal gradient algorithm [Schmidt et al., 2011] whose inexactness stems from the stochastic oracle (Gibbs sampling) used for gradient approximation; exact gradient evaluation is infeasible in general because inference in discrete MRFs is NP-hard [Koller and Friedman, 2009]. Theoretically, we provide novel verifiable bounds to inspect and control the quality of the gradient approximation. Empirically, we propose the tighten asymptotically (TAY) learning strategy, based on the verifiable bounds, to boost the performance of SPG.
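To make the setting concrete, the following is a minimal sketch of the SPG idea on a toy pairwise binary MRF: the gradient of the negative log-likelihood is the difference between model and empirical expectations of the sufficient statistics, the model expectation is Monte Carlo approximated (a stand-in for the Gibbs-sampling oracle; here we sample by exact enumeration, which is only feasible for tiny models), and the L_1 penalty is handled by a soft-thresholding proximal step. All names and parameter values below are illustrative, not the paper's implementation.

```python
import math
import random

def soft_threshold(w, t):
    # Proximal operator of t * |.| (the L_1 penalty), applied coordinatewise.
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

# Toy chain MRF on 3 spins in {-1,+1} with edge potentials only.
EDGES = [(0, 1), (1, 2)]
STATES = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]

def phi(x):
    # Sufficient statistics: one pairwise product per edge.
    return [x[i] * x[j] for i, j in EDGES]

def model_probs(theta):
    # Exact normalization by enumerating all 2^3 states; in a real MRF
    # this is intractable and SPG uses Gibbs sampling instead.
    unnorm = [math.exp(sum(t * f for t, f in zip(theta, phi(x)))) for x in STATES]
    Z = sum(unnorm)
    return [u / Z for u in unnorm]

def sample_model(theta, n, rng):
    return rng.choices(STATES, weights=model_probs(theta), k=n)

def approx_grad(theta, data, n_samples, rng):
    # grad of the negative log-likelihood = E_model[phi] - E_data[phi];
    # E_model is replaced by a sample average (the stochastic oracle).
    d = len(theta)
    e_data = [sum(phi(x)[k] for x in data) / len(data) for k in range(d)]
    samples = sample_model(theta, n_samples, rng)
    e_model = [sum(phi(x)[k] for x in samples) / n_samples for k in range(d)]
    return [em - ed for em, ed in zip(e_model, e_data)]

def spg(data, lam=0.1, step=0.5, iters=100, n_samples=300, seed=0):
    # Inexact proximal gradient: noisy gradient step, then soft-thresholding.
    rng = random.Random(seed)
    theta = [0.0 for _ in EDGES]
    for _ in range(iters):
        g = approx_grad(theta, data, n_samples, rng)
        theta = [soft_threshold(t - step * gi, step * lam)
                 for t, gi in zip(theta, g)]
    return theta
```

The soft-thresholding step is what produces exact zeros in the estimate, so edges with weak evidence are pruned; the paper's verifiable bounds concern how accurately the sampled gradient above tracks the exact one.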
