Conference on Uncertainty in Artificial Intelligence

Learning to Draw Samples with Amortized Stein Variational Gradient Descent



Abstract

We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable with respect to the parameters we want to adapt. We demonstrate our method with a number of applications, including variational autoencoders (VAE) with expressive encoders to model complex latent space structures, and hyper-parameter learning of MCMC samplers that allows Bayesian inference to adaptively improve itself when seeing more data.
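The procedure described in the abstract, which takes one SVGD step on the sampler's current outputs and then adjusts the sampler's parameters so its outputs track the updated particles, can be sketched in a minimal 1-D form. The Gaussian target, the linear sampler family `x = eta[0] + eta[1] * xi`, the fixed kernel bandwidth, and the step sizes below are illustrative assumptions for this sketch, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x, mu=2.0, sigma=1.0):
    # Score of an (assumed) 1-D Gaussian target N(mu, sigma^2);
    # only the unnormalized density is needed, as in the paper.
    return -(x - mu) / sigma**2

def svgd_direction(x, h=1.0):
    # Stein variational gradient with an RBF kernel (Liu & Wang, 2016).
    # A fixed bandwidth h is used here for simplicity; the usual choice
    # is the median heuristic.
    diff = x[:, None] - x[None, :]          # pairwise x_j - x_i, shape (n, n)
    K = np.exp(-diff**2 / (2 * h**2))       # kernel matrix k(x_j, x_i)
    gK = -diff / h**2 * K                   # grad of k w.r.t. x_j
    # phi_i = (1/n) sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    return (K @ grad_log_p(x) + gK.sum(axis=0)) / len(x)

# Amortized sampler: draw noise xi, output x = f(xi; eta) = eta[0] + eta[1]*xi.
eta = np.array([0.0, 1.0])
for _ in range(500):
    xi = rng.standard_normal(100)
    x = eta[0] + eta[1] * xi
    updated = x + 0.1 * svgd_direction(x)   # one SVGD step on the outputs
    # Move eta so f(xi; eta) tracks the updated particles: a gradient step
    # on the squared matching loss, i.e. eta += lr * (dx/deta) @ residual.
    jac = np.stack([np.ones_like(xi), xi])  # dx/deta for the linear sampler
    eta += 0.5 * jac @ (updated - x) / len(xi)

# After training, fresh noise maps directly to approximate target samples.
samples = eta[0] + eta[1] * rng.standard_normal(10000)
```

With the assumed Gaussian target N(2, 1), `samples.mean()` ends up near 2 and `samples.std()` near 1; the repulsive kernel-gradient term in `svgd_direction` is what keeps the particles, and hence the trained sampler, from collapsing to the mode.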

