JMLR: Workshop and Conference Proceedings

Learning from Conditional Distributions via Dual Embeddings


Abstract

Many machine learning tasks, such as learning with invariance and policy evaluation in reinforcement learning, can be characterized as problems of learning from conditional distributions. In such problems, each sample $x$ itself is associated with a conditional distribution $p(z|x)$ represented by samples $\{z_i\}_{i=1}^{M}$, and the goal is to learn a function $f$ that links these conditional distributions to target values $y$. These problems become very challenging when we only have limited samples, or in the extreme case only one sample, from each conditional distribution. Commonly used approaches either assume that $z$ is independent of $x$, or require an overwhelmingly large set of samples from each conditional distribution. To address these challenges, we propose a novel approach which employs a new min-max reformulation of the learning from conditional distribution problem. With such new reformulation, we only need to deal with the joint distribution $p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and establish theoretical sample complexity for such problems. Finally, our numerical experiments, on both synthetic and real-world datasets, show that the proposed approach can significantly improve over existing algorithms.
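To make the min-max idea concrete, below is a minimal sketch of the saddle-point training loop it suggests. This is not the paper's exact Embedding-SGD (which parameterizes the dual via embeddings in a function space); it assumes a squared loss $\ell(v,y)=\tfrac{1}{2}(v-y)^2$, whose Fenchel conjugate gives $\ell(v,y)=\max_u\, uv - uy - \tfrac{1}{2}u^2$, so the nested objective $\min_f \mathbb{E}_{x,y}[\ell(\mathbb{E}_{z|x}[f(z,x)], y)]$ turns into a saddle problem over samples from the joint distribution $p(z,x)$. The names `f_net`, `u_net`, `synthetic_batch`, the network sizes, and the step sizes are illustrative assumptions.

```python
import torch

# Hypothetical sketch of the min-max reformulation with squared loss:
#   min_f max_u  E_{x,z,y}[ u(x,y) * f(z,x) - y * u(x,y) - u(x,y)^2 / 2 ],
# which only needs one joint sample (x, z, y) per observation,
# i.e. a single z drawn from p(z | x) for each x.

dim_x, dim_z = 5, 3

f_net = torch.nn.Sequential(                       # primal function f(z, x)
    torch.nn.Linear(dim_z + dim_x, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
u_net = torch.nn.Sequential(                       # dual function u(x, y)
    torch.nn.Linear(dim_x + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

opt_f = torch.optim.SGD(f_net.parameters(), lr=1e-2)
opt_u = torch.optim.SGD(u_net.parameters(), lr=1e-2)

def synthetic_batch(n=128):
    """Toy joint samples: one z per x from p(z | x), y a noisy conditional mean."""
    x = torch.randn(n, dim_x)
    z = x[:, :dim_z] + 0.1 * torch.randn(n, dim_z)     # z depends on x
    y = z.sum(dim=1, keepdim=True) + 0.05 * torch.randn(n, 1)
    return x, z, y

for step in range(2000):
    x, z, y = synthetic_batch()
    f_val = f_net(torch.cat([z, x], dim=1))
    u_val = u_net(torch.cat([x, y], dim=1))
    # unbiased estimate of the saddle objective from joint samples (x, z, y)
    saddle = (u_val * f_val - y * u_val - 0.5 * u_val ** 2).mean()

    opt_f.zero_grad(); opt_u.zero_grad()
    saddle.backward()
    opt_f.step()                       # descend in the primal f
    for p in u_net.parameters():       # ascend in the dual u: flip gradient sign
        p.grad.neg_()
    opt_u.step()
```

The single backward pass plus the gradient sign flip on the dual parameters gives simultaneous stochastic descent in $f$ and ascent in $u$, which is the kind of saddle-point update the abstract's Embedding-SGD refers to, avoiding any need for multiple samples of $z$ per $x$ to estimate the inner conditional expectation.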
